Which Tools and Libraries do you use to develop Semantic Web applications?

Which tools, in which languages, do you use for development? Are they open source or commercial? Which features do you like about them?

I think this is a good question for gathering an overview of the available tools, which can be helpful for beginners. I'll incorporate the answers into the question as they come.

Java:
  • Apache Jena - a framework providing support for RDF parsing, storage and querying (SPARQL)
  • Fuseki - a SPARQL server for Jena
  • Pellet - an open source OWL reasoner
  • Open Virtuoso - an object-relational SQL database supporting RDF storage and SPARQL querying
  • OpenRDF Sesame - an extensible Java framework and web server for RDF parsing, storage and (SPARQL) querying
  • TopBraid Suite - suite of integrated semantic solution products and application development toolkits
  • Stardog - RDF database supporting scalable reasoning for RDFS & all OWL2 profiles
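For orientation, the core operation all of these stores share, keeping subject/predicate/object triples and matching them against patterns, can be sketched in a few lines of Python. This is a didactic toy, not how any of the products above work internally:

```python
# Toy in-memory triple store: store (s, p, o) tuples and match them
# against patterns, the basic service Jena, Sesame, Virtuoso, etc.
# provide (with real indexing, persistence, and SPARQL on top).

def match(triples, s=None, p=None, o=None):
    """Yield triples matching a pattern; None acts as a wildcard,
    like a variable in a SPARQL basic graph pattern."""
    for ts, tp, to in triples:
        if (s is None or ts == s) and \
           (p is None or tp == p) and \
           (o is None or to == o):
            yield (ts, tp, to)

triples = {
    ("ex:SemanticWeb", "rdf:type", "ex:Topic"),
    ("ex:SemanticWeb", "rdfs:label", "Semantic Web"),
    ("ex:RDF", "rdf:type", "ex:Topic"),
}

# Roughly: SELECT ?s WHERE { ?s rdf:type ex:Topic }
topics = sorted(s for s, _, _ in match(triples, p="rdf:type", o="ex:Topic"))
print(topics)  # ['ex:RDF', 'ex:SemanticWeb']
```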

PHP:
  • ARC2 - a semantic web PHP framework
  • Graphite - an open source PHP library, built on top of ARC2, to make it easy to do stuff with RDF data really quickly, without having to naff around with databases

C:
  • Redland - a set of libraries for RDF handling and querying

Python:
  • rdflib - a library for handling RDF
  • librdf - wrapper for Redland library
  • SuRF - object RDF mapper
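The idea behind an object-RDF mapper like SuRF (or RDFAlchemy, below) is to expose the triples about one subject as ordinary attribute access on a Python object. The class and names here are illustrative, not SuRF's actual API:

```python
# Sketch of object-RDF mapping: attribute lookups on a Resource are
# answered by pattern-matching the underlying triples.

class Resource(object):
    def __init__(self, uri, triples):
        self.uri = uri
        self._triples = triples

    def __getattr__(self, name):
        # Collect all objects of triples (self.uri, name, ?o).
        values = [o for s, p, o in self._triples
                  if s == self.uri and p == name]
        if not values:
            raise AttributeError(name)
        return values

triples = [
    ("ex:alice", "name", "Alice"),
    ("ex:alice", "knows", "ex:bob"),
    ("ex:alice", "knows", "ex:carol"),
]

alice = Resource("ex:alice", triples)
print(alice.name)   # ['Alice']
print(alice.knows)  # ['ex:bob', 'ex:carol']
```

A real mapper would additionally write attribute assignments back to the store and map predicates through a namespace table rather than using bare strings.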


Other tools and services:
  • Dydra - a service for cloud-based storage and querying of semantic data
  • Command line (conversion, validation, query): apt-get install redland-utils;
  • Conversion webservices: Morph, triplr;
  • ORM (Python): RDFAlchemy and SURF;
  • General Python framework: rdflib;
  • Namespace lookup: prefix.cc;
  • Quick registration of properties: OpenVocab;
  • Testing schema publication/linked data: Vapour;
  • Browsing links, de-referencing, ... : Tabulator.
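What a namespace lookup like prefix.cc gives you is, locally, just a prefix table: expanding CURIEs ("compact URIs" such as foaf:name) to full URIs. A small sketch, with a hand-picked table of well-known namespaces (the expand_curie helper is illustrative, not any particular library's API):

```python
# Expand prefix:localname CURIEs against a table of common
# namespaces, the same mapping prefix.cc serves over the web.

PREFIXES = {
    "rdf":  "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "rdfs": "http://www.w3.org/2000/01/rdf-schema#",
    "owl":  "http://www.w3.org/2002/07/owl#",
    "foaf": "http://xmlns.com/foaf/0.1/",
}

def expand_curie(curie):
    prefix, _, local = curie.partition(":")
    try:
        return PREFIXES[prefix] + local
    except KeyError:
        raise ValueError("unknown prefix: %r" % prefix)

print(expand_curie("foaf:name"))
# http://xmlns.com/foaf/0.1/name
```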

Not necessarily complete, but still a good reference for tools is the tool list maintained at W3C. And it is a wiki, so if something is missing, add it!

There are, of course, commercial solutions available as well.

My company, Cambridge Semantics, offers a few products known collectively as Anzo that our customers use to build solutions using Semantic Web technologies. The center of our software is our server component, which is also available as open source at http://openanzo.org. The server provides:

  • Client development libraries in Java, JavaScript, and .NET
  • A named-graph-based RDF store
  • Access control at the named graph level
  • Versioning (keep past histories of graphs)
  • Notification (subscription to real-time events when graphs or patterns change)
  • Replication (maintain local caches/replicas of subsets of the server database)
  • Standard stuff: read graphs, write graphs, query graphs with SPARQL
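Several of the features above (access control, versioning, notification) hinge on named graphs: storing quads rather than triples, with the fourth element identifying the graph a triple belongs to, so that the graph becomes the unit you secure and version. A toy sketch of that idea, not based on openanzo.org internals:

```python
# Quad store: each triple is tagged with a named graph, and access
# control is enforced per graph rather than per triple.

class QuadStore(object):
    def __init__(self):
        self.quads = []   # (graph, subject, predicate, object)
        self.acl = {}     # graph name -> set of users allowed to read

    def add(self, graph, s, p, o):
        self.quads.append((graph, s, p, o))

    def graph(self, name, user):
        """Return the triples of one named graph, enforcing the ACL."""
        if user not in self.acl.get(name, set()):
            raise PermissionError("%s may not read %s" % (user, name))
        return [(s, p, o) for g, s, p, o in self.quads if g == name]

store = QuadStore()
store.acl["ex:hr-graph"] = {"alice"}
store.add("ex:hr-graph", "ex:bob", "ex:salary", "50000")

print(store.graph("ex:hr-graph", "alice"))
# [('ex:bob', 'ex:salary', '50000')]
```

Versioning and change notification fall out of the same granularity: snapshot or watch a graph name instead of individual triples.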

We've built and sell two products on top of this server:

  • Anzo for Excel - take arbitrary spreadsheet data and link it to schemas/ontologies, turning the spreadsheet into a user interface for RDF data (and pulling RDF data out of existing spreadsheets)
  • Anzo on the Web - let non-technical users build views (lenses) of their RDF data with an Exhibit-like faceted-browsing interface

In PHP: ARC and Moriarty, with a Talis store as backend, morph.talis.com as a JSONP proxy, and prefix.cc for cutting and pasting namespace URIs (the rdf, rdfs and owl namespace URIs are long to type). rdfQuery for JavaScript?

From the command line I use redland's rapper utility. I find it really handy for exploring a linked data space, and outputting stuff as Turtle.

rapper -o turtle http://dbpedia.org/resource/Semantic_Web

When I want to look at some bigger pools of RDF I've used Python's rdflib with the Sleepycat backend, which can handle millions of triples easily and is very simple to use. SPARQL support isn't really all there, but it has nice methods for digging into the triples from Python, and that has been enough for me.

$ python
Python 2.6.4 (r264:75706, Dec 7 2009, 18:43:55)
[GCC 4.4.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import rdflib
>>> graph = rdflib.Graph('Sleepycat')
>>> graph.open('my-store', create=True)
>>> graph.parse('http://dbpedia.org/resource/Semantic_Web')
>>> for object in graph.objects(predicate=rdflib.namespace.OWL.sameAs):
...     print object


When it comes to publishing semantic web data I've been taking the somewhat lazy approach of focusing on good RESTful practices and converting to an RDF serialization as late as possible, just as people do with HTML representations. This means I'm not really limited to a semantic web technology stack, and can use more familiar web frameworks (Django, Rails, etc.) with relational or NoSQL database backends. Maybe this isn't really "doing" semantic web software development, but it suits me ok :-)
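That "serialize as late as possible" approach is just ordinary HTTP content negotiation: keep one internal model and pick a serializer per Accept header. A framework-agnostic sketch; the serializers here are trivial stand-ins for real Turtle or RDF/XML writers:

```python
# Pick an output format for the same internal triples based on the
# client's Accept header, with HTML as the default representation.

def to_turtle(triples):
    return "\n".join("<%s> <%s> \"%s\" ." % t for t in triples)

def to_html(triples):
    rows = "".join("<li>%s %s %s</li>" % t for t in triples)
    return "<ul>%s</ul>" % rows

SERIALIZERS = {
    "text/turtle": to_turtle,
    "text/html": to_html,
}

def render(triples, accept):
    # Fall back to HTML when the client asks for nothing we know.
    serializer = SERIALIZERS.get(accept, to_html)
    return serializer(triples)

triples = [("ex:doc", "dc:title", "Semantic Web")]
print(render(triples, "text/turtle"))
# <ex:doc> <dc:title> "Semantic Web" .
```

In a real Django or Rails view the dispatch on Accept would be handled by the framework; only the serializer table is specific to RDF.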

For most day-to-day activities I'm using the Talis Platform, accessed via my Ruby client, Pho. (Disclaimer: I work for Talis.) Pho also includes a general-purpose SPARQL client.

I also use Redland very heavily both on the command-line (rapper is great) and via its Ruby bindings which I use in Pho.

When programming in Java I use Jena, and in particular its TDB backend, a native triple store. TDB ships with command-line scripts that make it very easy to process, munge, and query data.

Obviously I am biased because I'm CTO at Talis, but I use the Talis Platform (http://www.talis.com/platform)

I'm using several triple stores and am actually still evaluating them. So far I have some experience with:

Currently looking into: