How do I build a reasoner?

If I wanted to do semantic web application development in [some obscure language] and nothing is currently available, where would I go to find out how reasoners work, and how would I build a production-quality reasoner for working with RDFS and/or OWL in my obscure language?

Also, what API standards are out there that I ought to adhere to?

UPDATE

Introduction to DLs by Tommie Meyer

First you need to narrow the question a little to the particular semantics and language fragments you are interested in.

The interesting choices are:

  • OWL DL via a standard tableaux algorithm, as mentioned by John and Bastian and discussed at tabling/tableaux reasoners.
  • OWL EL, which is rather simpler to implement than the whole of DL and a lot more tractable in terms of performance - see, for example, CEL. (A toy sketch of the EL completion-rule algorithm follows this list.)
  • RDFS or OWL RL via rules.
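To give a flavour of how simple the EL route can be, here is a rough Python sketch of the well-known completion-rule algorithm for EL subsumption. The axiom encoding is made up for illustration, and it assumes the TBox has already been normalized to the four axiom shapes used in the EL literature:

    # Toy sketch of the EL completion rules. Assumes a normalized TBox:
    #   ("sub", A, B)          A subClassOf B
    #   ("conj", A1, A2, B)    (A1 and A2) subClassOf B
    #   ("ex_rhs", A, r, B)    A subClassOf (some r B)
    #   ("ex_lhs", r, A, B)    (some r A) subClassOf B
    TOP = "owl:Thing"

    def classify(classes, axioms):
        S = {A: {A, TOP} for A in classes}  # S[A]: known superclasses of A
        R = set()                           # (r, A, B): derived A subClassOf (some r B)
        changed = True
        while changed:
            changed = False
            for ax in axioms:
                for A in classes:
                    if ax[0] == "sub" and ax[1] in S[A] and ax[2] not in S[A]:
                        S[A].add(ax[2]); changed = True
                    elif ax[0] == "conj" and ax[1] in S[A] and ax[2] in S[A] \
                            and ax[3] not in S[A]:
                        S[A].add(ax[3]); changed = True
                    elif ax[0] == "ex_rhs" and ax[1] in S[A] \
                            and (ax[2], A, ax[3]) not in R:
                        R.add((ax[2], A, ax[3])); changed = True
                    elif ax[0] == "ex_lhs":
                        for (r, X, B) in list(R):
                            if X == A and r == ax[1] and ax[2] in S[B] \
                                    and ax[3] not in S[A]:
                                S[A].add(ax[3]); changed = True
        return S

    # Parent subClassOf (some hasChild Person), and
    # (some hasChild Person) subClassOf Caregiver,
    # together entail: Parent subClassOf Caregiver.
    axioms = [("ex_rhs", "Parent", "hasChild", "Person"),
              ("ex_lhs", "hasChild", "Person", "Caregiver")]
    S = classify(["Parent", "Person", "Caregiver"], axioms)
    print("Caregiver" in S["Parent"])   # True

A serious implementation indexes the axioms instead of re-scanning them on every round, but the fixpoint structure is the same.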

I'm assuming that building a full theorem prover is too big a job just for a shift to a new language, so we'll skip that option :)

Since you mention RDFS, and since rule-based reasoners are in any case the easiest way to get started with building inference for a new language, I'd suggest focusing on that branch. So now we need some sort of rule language, an engine, and a rule set. For the language I'd recommend thinking in terms of simple datalog rules such as those which can be expressed in RIF Core. The rule set is the easy part: the rules for RDFS and OWL RL are in the specs anyway, but assuming you want some fraction of OWL RL, take a look at OWL 2 RL in RIF. While that provides the complete OWL 2 RL rule set in RIF syntax (appendix 9), the translation-based algorithm (appendix 8) will give you better performance.
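To make the rule-set-as-data idea concrete, here is how two of the RDFS entailment rules (rdfs9 and rdfs11, using their names from the RDF Semantics spec) might be written down in Python; the tuple encoding is my own, but body-patterns-plus-head is the shape most simple rule engines work with:

    RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    RDFS = "http://www.w3.org/2000/01/rdf-schema#"

    # Each rule is (body, head): if all body patterns match, assert the head.
    # Strings starting with "?" are variables; everything else is an IRI.
    RULES = [
        # rdfs11: rdfs:subClassOf is transitive
        ([("?c1", RDFS + "subClassOf", "?c2"),
          ("?c2", RDFS + "subClassOf", "?c3")],
         ("?c1", RDFS + "subClassOf", "?c3")),
        # rdfs9: instances inherit types along rdfs:subClassOf
        ([("?c1", RDFS + "subClassOf", "?c2"),
          ("?x", RDF + "type", "?c1")],
         ("?x", RDF + "type", "?c2")),
    ]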

So now we just need a rule engine. There are a couple more branch points to decide on - forward or backward chaining, and in-memory or database-backed?

For a first pass, in-memory solutions are certainly easier to implement. If you do need a database solution then look at the literature on datalog. In particular, most texts on databases will have some description of bottom-up evaluation (naive and semi-naive rule closure), where you iteratively run the rules over your existing tables and assert the newly joined results into an updated predicate table; in the semi-naive version, each iteration only considers derivations that use at least one fact that was new in the previous iteration. For example, Ullman's lecture notes are a good starting place. Then look at the literature on "magic sets", which is a method for combining a query with a rule set to produce a custom rule set for that query which you can then evaluate bottom-up.
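Here is a minimal in-memory sketch of semi-naive evaluation over triple rules (a toy of my own, not any particular system's API); the point to notice is that each round only keeps derivations that used at least one fact from the previous round's delta:

    def match(pattern, fact, bindings):
        """Extend bindings so that pattern equals the ground fact, or None."""
        b = dict(bindings)
        for p, f in zip(pattern, fact):
            if p.startswith("?"):
                if b.setdefault(p, f) != f:
                    return None
            elif p != f:
                return None
        return b

    def join(body, facts, delta, b, used_delta):
        """Yield bindings matching all body patterns, using >= 1 delta fact."""
        if not body:
            if used_delta:
                yield b
            return
        for fact in facts:
            b2 = match(body[0], fact, b)
            if b2 is not None:
                yield from join(body[1:], facts, delta, b2,
                                used_delta or fact in delta)

    def semi_naive(facts, rules):
        facts, delta = set(facts), set(facts)
        while delta:
            new = set()
            for body, head in rules:
                for b in join(body, facts, delta, {}, False):
                    derived = tuple(b.get(t, t) for t in head)
                    if derived not in facts:
                        new.add(derived)   # assumes range-restricted rules
            facts |= new
            delta = new                    # next round joins against the delta
        return facts

    # subClassOf transitivity over a small chain:
    sub = "subClassOf"
    rules = [([("?a", sub, "?b"), ("?b", sub, "?c")], ("?a", sub, "?c"))]
    facts = {("A", sub, "B"), ("B", sub, "C"), ("C", sub, "D")}
    print(("A", sub, "D") in semi_naive(facts, rules))   # True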

So now we are down to in-memory datalog rules. Do you want forward chaining or backward chaining? Forward chaining is the most efficient if you will eventually want "most" of the possible entailments; backward chaining is the most efficient if you only want to answer a few queries which touch a small fraction of the possible entailments.

For backward chaining, look at the general literature on Prolog interpreters first, then study the abstract machines for efficient Prolog. The best starting reference for that is Hassan Aït-Kaci's book on the Warren Abstract Machine (WAM). My experience is that it is really helpful to have tabling (where partial results for a given goal are stored), which can be both more efficient and more forgiving in avoiding the infinite recursions caused by bad orderings of expressions; it is sort of a compromise between forward and backward chaining. If you want to go that route then search the literature for SLG and in particular SLG-WAM, an extension to the WAM that supports tabling (and negation) as implemented in XSB.
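To illustrate only the idea, here is a heavily simplified goal-directed solver with a table of answers per goal: the table cuts the infinite recursion a naive Prolog-style solver would hit on a transitive rule, and an outer fixpoint loop compensates for the answers that in-progress subgoals could not see yet. Real SLG resolution suspends and resumes goals rather than retrying, so treat this as a cartoon of tabling, not as SLG; all names and encodings are mine:

    import itertools

    fresh = itertools.count()   # for renaming rule variables apart

    def match(pattern, fact, bindings):
        """One-way match of a pattern against a triple; extends bindings."""
        b = dict(bindings)
        for p, f in zip(pattern, fact):
            if p.startswith("?"):
                if b.setdefault(p, f) != f:
                    return None
            elif p != f:
                return None
        return b

    def subst(pattern, b):
        return tuple(b.get(t, t) for t in pattern)

    def canonical(goal):
        """Rename goal variables positionally so equivalent goals share a table entry."""
        m = {}
        return tuple(m.setdefault(t, "?%d" % len(m)) if t.startswith("?") else t
                     for t in goal)

    def solve(goal, facts, rules, table, active):
        goal = canonical(goal)
        answers = table.setdefault(goal, set())
        if goal in active:          # recursive subgoal: reuse answers so far
            return answers
        active = active | {goal}
        for f in facts:             # base case: stored facts
            if match(goal, f, {}) is not None:
                answers.add(f)
        for body, head in rules:
            n = str(next(fresh))    # standardize this rule's variables apart
            body = [tuple(t + n if t.startswith("?") else t for t in p)
                    for p in body]
            head = tuple(t + n if t.startswith("?") else t for t in head)
            b = match(head, goal, {})
            if b is None:           # nb: fails if the goal has a variable where
                continue            # the head has a constant - fine for this demo
            for bf in prove_all([subst(p, b) for p in body],
                                facts, rules, table, active, {}):
                ans = subst(goal, bf)
                if not any(t.startswith("?") for t in ans):
                    answers.add(ans)
        return answers

    def prove_all(subgoals, facts, rules, table, active, b):
        """Solve a conjunction of subgoals left to right, threading bindings."""
        if not subgoals:
            yield b
            return
        g = subst(subgoals[0], b)
        for fact in list(solve(g, facts, rules, table, active)):
            b2 = match(g, fact, b)
            if b2 is not None:
                yield from prove_all(subgoals[1:], facts, rules, table, active, b2)

    def query(goal, facts, rules):
        """Re-run until the table stops growing, then the answers are complete."""
        table = {}
        while True:
            size = sum(len(a) for a in table.values())
            result = solve(goal, facts, rules, table, set())
            if sum(len(a) for a in table.values()) == size:
                return result

    sub = "subClassOf"
    rules = [([("?a", sub, "?b"), ("?b", sub, "?c")], ("?a", sub, "?c"))]
    facts = {("A", sub, "B"), ("B", sub, "C"), ("C", sub, "D")}
    print(sorted(query(("A", sub, "?x"), facts, rules)))
    # [('A', 'subClassOf', 'B'), ('A', 'subClassOf', 'C'), ('A', 'subClassOf', 'D')]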

However, when starting from scratch, arguably the best choice is to go for forward chaining. In that case the easiest implementation approach is to simply iterate over the rules, matching the set of triple patterns in the condition part of each rule and asserting the consequent triple patterns (e.g. using SPARQL CONSTRUCT queries), repeating until there are no new answers. For very simple rule sets with little recursion this trivial approach can actually work pretty well. However, typically (and certainly for things like OWL RL) there is enough recursion that you need to work a bit smarter.

The standard approach to this is the RETE algorithm, where you represent the rules as a network and store tokens representing partially matched patterns within the network. When new facts are added (as a result of some rule firing), the network can efficiently work out the incremental new results without having to rework the previous partial matches. If you want to go further there are several different optimizations and improvements to RETE, including various approaches to lazy evaluation within the network, to set-based operators and to non-binary nodes - e.g. look at Gator.
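For reference, the trivial iterate-until-fixpoint approach mentioned above looks roughly like this with rdflib (assuming rdflib and its SPARQL support; the rules and data are illustrative):

    from rdflib import Graph, Namespace, RDF

    # rdfs9 and rdfs11 written as SPARQL CONSTRUCT rules
    RULES = [
        """PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
           CONSTRUCT { ?c1 rdfs:subClassOf ?c3 }
           WHERE { ?c1 rdfs:subClassOf ?c2 . ?c2 rdfs:subClassOf ?c3 }""",
        """PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
           PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
           CONSTRUCT { ?x rdf:type ?c2 }
           WHERE { ?c1 rdfs:subClassOf ?c2 . ?x rdf:type ?c1 }""",
    ]

    g = Graph()
    g.parse(data="""
        @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
        @prefix ex:   <http://example.org/> .
        ex:Dog    rdfs:subClassOf ex:Mammal .
        ex:Mammal rdfs:subClassOf ex:Animal .
        ex:rex    a ex:Dog .
    """, format="turtle")

    # Naive fixpoint: run every rule, add what it produces, and stop
    # when the graph stops growing.
    while True:
        before = len(g)
        for rule in RULES:
            for triple in g.query(rule):   # CONSTRUCT results iterate as triples
                g.add(triple)
        if len(g) == before:
            break

    EX = Namespace("http://example.org/")
    print((EX.rex, RDF.type, EX.Animal) in g)   # True

A RETE engine does the same job incrementally, instead of re-running every CONSTRUCT from scratch on each round.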

If this walk through set of choices isn't right for you then backtrack and try some of the other branches :)

OWL and OWL 2 are based on description logics and there are numerous papers out there on tableau algorithms for reasoning over ontologies (google is your friend :)).

A good place to start would be here:

http://www.cs.manchester.ac.uk/~sattler/reasoners.html

That page has lots of DL/OWL reasoner links, and you'll find many of the reasoners are open source and/or have papers documenting the algorithms used.

For more about description logic in general go here:

http://dl.kr.org/

I would start by checking out the ALC description logic, which is one of the simplest DLs, and then try to implement some simple reasoning algorithms for it, like the one described in the "EXPTIME tableaux for ALC" paper by Donini and Massacci, to get a feeling for, and a deeper understanding of, the matter.
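To make that concrete, here is a toy Python satisfiability checker for ALC concepts - no TBox, no blocking, and the input must already be in negation normal form (the tuple encoding is mine, for illustration). It is the textbook tableau procedure: saturate with the and-rule, branch on or, and recurse into one fresh successor per existential, carrying the matching universal constraints along:

    # Concepts in negation normal form, encoded as tuples:
    #   ("atom", "A"), ("not", ("atom", "A")),
    #   ("and", c1, c2), ("or", c1, c2),
    #   ("some", "r", c), ("all", "r", c)

    def sat(label):
        """Is this set of ALC concepts satisfiable at a single node?"""
        label = set(label)
        # AND-rule: add both conjuncts until nothing changes
        changed = True
        while changed:
            changed = False
            for c in list(label):
                if c[0] == "and" and not {c[1], c[2]} <= label:
                    label |= {c[1], c[2]}
                    changed = True
        # clash: A and not-A at the same node
        for c in label:
            if c[0] == "atom" and ("not", c) in label:
                return False
        # OR-rule: branch on the first unexpanded disjunction
        for c in label:
            if c[0] == "or" and c[1] not in label and c[2] not in label:
                return sat(label | {c[1]}) or sat(label | {c[2]})
        # EXISTS-rule: one fresh successor per "some", plus matching "all"s
        for c in label:
            if c[0] == "some":
                succ = {c[2]} | {d[2] for d in label
                                 if d[0] == "all" and d[1] == c[1]}
                if not sat(succ):
                    return False
        return True

    # (some r A) and (all r not-A) is unsatisfiable;
    # (some r A) and (all r B) is satisfiable.
    A, B = ("atom", "A"), ("atom", "B")
    print(sat([("and", ("some", "r", A), ("all", "r", ("not", A)))]))  # False
    print(sat([("and", ("some", "r", A), ("all", "r", B))]))           # True

With a general TBox you would also need blocking to guarantee termination, which is where the published algorithms earn their keep.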

Edit: In the meantime, there was another question on tableaux algorithms which might also be of interest to people trying to develop a reasoner on their own.

In terms of standards to adhere to, the W3C Rules Interchange Format (RIF) documents should be on your reading list.

To bring something new to the everlasting either-tableaux-or-rules discussion (and to this old question :)), here is a pretty straight-forward approach for doing OWL 2 Direct Semantics [1] reasoning based on standard first-order theorem proving. It has both advantages and limitations compared to typical OWL DL reasoning approaches based on tableaux algorithms. I will use the online theorem prover Tau [2] for demonstration purposes, but one could use any off-the-shelf theorem prover.

Assume you have two OWL 2 ontologies O1 and O2 given in OWL 2 Functional Syntax [3]. If they are given in RDF, first map [4] them to Functional Syntax. You can do this programmatically with the OWL API [5], or manually with the Manchester OWL 2 Syntax Converter [6], which is based on the OWL API.

The first step is to apply the definitions given in the OWL 2 Direct Semantics specification [1] to translate all the axioms of the two ontologies into standard first-order logic. The definitions in the spec are a bit of a first-order logic/set theory mix, but it should not be hard to get it done. We call the resulting ontologies "FOL(O1)" and "FOL(O2)", respectively. See below for a simple example based on Tau. One would typically write a syntax converter for translating from Functional Syntax into the target first-order logic syntax (different theorem provers use different syntaxes), but in the example below we are doing the translation manually.
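For instance, a converter covering just the two axiom types used in the example below might look like this in Python (a toy sketch with made-up function names; a real converter would implement the full Direct Semantics definitions):

    import re

    def axiom_to_kif(axiom):
        """Translate a tiny subset of OWL 2 Functional Syntax to KIF.
        Covers only the two axiom types in the example below."""
        m = re.fullmatch(r"SubClassOf\((\S+) (\S+)\)", axiom.strip())
        if m:
            c, d = m.groups()
            return f"(forall (?x) (=> ({c} ?x) ({d} ?x)))"
        m = re.fullmatch(r"ClassAssertion\((\S+) (\S+)\)", axiom.strip())
        if m:
            c, a = m.groups()
            return f"({c} {a})"
        raise ValueError("axiom type not supported in this sketch: " + axiom)

    print(axiom_to_kif("SubClassOf(C D)"))      # (forall (?x) (=> (C ?x) (D ?x)))
    print(axiom_to_kif("ClassAssertion(C u)"))  # (C u)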

Next, we can ask whether O1 entails O2. FOL(O1) will be the premise knowledge base, while FOL(O2) will be the query to be proved in the context of FOL(O1). The example below shows how this can be done with Tau. In any case, if the entailment holds, the theorem prover will say "yes", unless it runs into some resource limit.

Here are the limitations of this approach.

Since first-order logic is undecidable, one cannot always expect a safe "no" for non-entailments. Nevertheless, one will get a "no" at least sometimes, if the theorem prover has some built-in means for non-entailment detection; this will, of course, always be incomplete. Tau seems to contain such a procedure [FIXME], since I have seen it saying "disproved" from time to time, but often I receive "unresolved" as a result. However, one can typically expect a "yes" for any positive entailment, since first-order logic is semi-decidable. Resource limitations may occasionally be a showstopper even for positive entailments, but this is no different for tableaux-based DL reasoners (or for any other software).

Another limitation is performance. First-order provers are much more generic than DL reasoners, which have been designed for a specific fragment of first-order logic. So no one should expect first-order prover-based OWL 2 reasoners to be as fast as specialized DL reasoners. Whether "not as fast" is still "fast enough" will depend entirely on one's application, so one will have to evaluate rather than theorize.

Here are the advantages of this approach.

One doesn't have to care about all those syntactic restrictions of OWL 2 DL that have been introduced to guarantee decidability (see the second item list in Chap 3 of [3]). By choosing to use a theorem prover for OWL reasoning one has actually given up decidability anyway, so one can safely forget about all those constraints. With this approach one can, for example, make the "uncle" property (definable by a sub-property chain axiom) an asymmetric and irreflexive property (disallowed in OWL 2 DL), and whatever is entailed from this via the OWL 2 Direct Semantics will be affirmatively answered by our reasoning approach (modulo resource limitations, of course). However, if one's application really depends on non-entailment detection or consistency detection (i.e. if one isn't already happy with "unresolved" after some time of hard checking for entailment), then this kind of sloppiness can become a problem... sometimes at least.

To summarize:

The theorem-prover approach is relatively easy to implement: basically, encode the definitions in the Direct Semantics spec [1] to translate OWL 2 Functional Syntax ontologies [3] into the input encoding of the target first-order theorem prover. The approach has the advantage of being able to (potentially) detect more OWL entailments than specialized OWL 2 DL reasoners, and the disadvantages that detecting all non-entailments is not guaranteed and that performance will probably be worse. It's a trade-off!

Btw, first-order theorem provers typically do not apply tableaux methods, at least not as their primary reasoning procedure. Many use resolution [8]; some use other approaches. There are even a few papers around about using theorem provers for description-logic-style reasoning, such as [9,10], and while they find issues, at least they do not report that the approach is a total disaster.

Now, here is the example. The two ontologies are given as follows:

O1:

SubClassOf(C D)

ClassAssertion(C u)

O2:

ClassAssertion(D u)

Tau [2] uses KIF [11], a Lisp-style first-order logic dialect, as its input language. Translating the two ontologies into KIF results in:

FOL(O1):

(forall (?x) (=> (C ?x) (D ?x)))

(C u) 

FOL(O2):

(D u)

Next, you can ask whether O1 entails O2.

1) Create a new KB with name "O1" (press "Create").

2) Add the two formulas of FOL(O1) by first pressing "Add" and then pasting the above text into the text field; press "Assert".

3) Check the correct content by pressing "Display"; should show the two FOL(O1) formulas.

4) Press "Prove" and make sure that "Premise KB" shows "O1", not "NONE". Put FOL(O2) into the text field and press "Prove".

The result should be shown on the right-hand side, and it should say "Proved.", in friendly green letters.

[1] http://www.w3.org/TR/2009/REC-owl2-direct-semantics-20091027/

[2] http://www.hsinfosystems.com/taujay

[3] http://www.w3.org/TR/2009/REC-owl2-syntax-20091027

[4] http://www.w3.org/TR/2009/REC-owl2-mapping-to-rdf-20091027/

[5] http://owlapi.sourceforge.net/

[6] http://owl.cs.manchester.ac.uk/converter/

[7] http://www.w3.org/Submission/SWRL/

[8] http://en.wikipedia.org/wiki/Resolution_%28logic%29

[9] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.103.2906

[10] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.62.2929

[11] http://logic.stanford.edu/kif/

Jos de Roo's Euler reasoner [1] is a general-purpose approach and a good starting place.

[1] http://www.agfa.com/w3c/euler/