Why Did Freebase Give Up On RDF?

If I just look at the structure of the Freebase quad dump, it's hard to believe there's a big philosophical difference between Freebase and RDF. The value/destination distinction seems to be a way to implement a slightly deviant form of namespacing and language tagging -- a different way to do something similar. They've got extra data about facts in their system (the creator's user id and timestamps) to support the 'data wiki', but it seems a triple store could be modified to support this.

Even if you think MQL is a better language than SPARQL, it seems to me that a language very much like MQL could be built on top of a triple store.
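To make that concrete, here is a toy sketch of the idea: MQL is query-by-example JSON where nulls mark the values to fill in, which maps fairly naturally onto SPARQL triple patterns. The property mangling and the `fb:` prefix below are my own simplifications, not real Freebase MQL semantics.

```python
def mql_to_sparql(query: dict) -> str:
    """Translate a flat MQL-like dict into a SPARQL SELECT query.

    Keys with concrete values become fixed triple patterns;
    keys bound to None (MQL's nulls) become variables to fill in.
    """
    patterns, selects = [], []
    for key, value in query.items():
        prop = key.replace("/", "_").strip("_")  # crude property mangling
        if value is None:                        # null means "fill this in"
            var = f"?{prop}"
            selects.append(var)
            patterns.append(f"?s fb:{prop} {var} .")
        else:
            patterns.append(f'?s fb:{prop} "{value}" .')
    return (
        "SELECT " + " ".join(selects) + " WHERE {\n  "
        + "\n  ".join(patterns) + "\n}"
    )

# MQL-style question: "what is the name of each thing of type /music/artist?"
print(mql_to_sparql({"type": "/music/artist", "name": None}))
```

A real translation would have to handle nesting, arrays, and a PREFIX declaration, but nothing about the flat case requires a non-RDF store underneath.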

Colin Evans and Jamie Taylor wrote a book about (mainly) RDF technology, so I don't believe they "went their own way" out of unfamiliarity -- they had a reason. Yet, looking at RDF tools in 2011 I feel like a kid in a candy store, and wonder how good their decision was in retrospect.

Well, when Freebase first went public in early 2007, SPARQL wasn't even a W3C standard yet, and off-the-shelf RDF tech didn't look remotely as appealing as it does now. Metaweb was never an RDF shop. They added rdf.freebase.com somewhere along the way, which is great, but that's likely just some web front-end sitting in front of their non-RDF store.

A better question to ask may be: When will they finally add a SPARQL endpoint? ;-)

Well, they didn't quite "give up" on RDF completely, seeing as there's rdf.freebase.com, a web service to retrieve an RDF description of any Freebase resource. I've recently been using that service (in combination with some simple MQL queries to find identifiers matching certain keywords) quite successfully to integrate their RDF with other RDF.
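The integration pattern above can be sketched in a few lines: use mqlread to find identifiers, then fetch each resource's RDF description from rdf.freebase.com. The URL shapes and the query envelope here are written from memory and should be checked against the Freebase docs before relying on them.

```python
import json
import urllib.parse

# Assumed endpoint locations -- verify against the Freebase documentation.
MQL_ENDPOINT = "http://api.freebase.com/api/service/mqlread"
RDF_BASE = "http://rdf.freebase.com/rdf"

def mql_read_url(query: dict) -> str:
    """Build a GET URL for a single mqlread query envelope."""
    envelope = json.dumps({"query": query})
    return MQL_ENDPOINT + "?query=" + urllib.parse.quote(envelope)

def rdf_url(freebase_id: str) -> str:
    """Map a Freebase id like '/en/blade_runner' to its RDF document URL.

    The slash-to-dot convention is my recollection of how the service
    names resources; treat it as an assumption.
    """
    return RDF_BASE + "/" + freebase_id.lstrip("/").replace("/", ".")

print(rdf_url("/en/blade_runner"))
```

From there it's ordinary RDF work: parse each fetched document with any RDF library and merge the resulting triples with your other data.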

But you're right that they don't support a SPARQL endpoint, which is a real shame since it would certainly make integration a lot easier. I briefly looked into some form of on-the-fly conversion between MQL and SPARQL, but while this is easy enough to do for any particular SPARQL query, it's quite hard to generalize.

Freebase is still available - enthusiastically - in RDF; see dumps at http://wiki.freebase.com/wiki/Data_dumps#RDF_dump

There is work ongoing to improve the quality and usefulness of those files. For example, would you all prefer Turtle or N-Triples? One big file or several smaller ones? Is it better to have a more complete dump that includes some boring, trivial, or unimportant triples, or a smaller, lighter subset?

RDF is important to Google. See also http://schema.org/ ...

Pure speculation:

Freebase is owned by Metaweb, which in turn was bought by Google around a year ago. Google seems to be avoiding RDF, which might be why things haven't progressed. One could therefore float any number of conspiracy theories about Google's motives for acquiring Metaweb.