# How does adding an ABox affect reasoner performance?

Hi All,

As we know, the OWL 2 profiles are based on Description Logics; for instance, OWL 2 DL is underpinned by the description logic SROIQ. This is done mostly to guarantee known (and acceptable) computational properties for the standard reasoning tasks over the TBox, i.e., over ontologies built from the language constructs that a particular OWL 2 profile provides.

My question is: what happens to performance once ABox data is added on top of the TBox? How does that change the performance of the reasoning tasks? And where can I find more information on this?

Any discussion or hints in this regard are very much appreciated!

Take a look at the computational properties of the OWL 2 profiles (Table 10). It tells you everything you always wanted to know about complexity (but were afraid to ask) :)

OWL 2 RL and OWL 2 QL both come with complexity bounds that cover ABox as well as TBox reasoning.

What do you learn from knowing that OWL 2 DL has an asymptotic worst-case complexity of 2-NExpTime? When you look at the list of complexities cited by Antoine, you see that this means roughly 2^(2^n) computational steps for an input of size n. And that is for a non-deterministic machine. Such machines do not exist (yet; we are all still waiting for quantum computers!), so, provided 2-ExpTime != 2-NExpTime (the double-exponential analogue of the assumption P != NP), you can expect to add one more exponential level when an "ordinary" deterministic machine simulates the non-deterministic one. This results in roughly

```
2^(2^(2^n)) steps
```
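To get a feel for how fast this triple exponential grows, here is a quick stdlib-only Python check (the helper name and the loop bound are mine, purely for illustration; for n >= 5 the number is already far too large to materialize):

```python
def digits_of_triple_exp(n: int) -> int:
    """Decimal digit count of 2**(2**(2**n))."""
    value = 2 ** (2 ** (2 ** n))
    return len(str(value))

# Digit count explodes almost immediately:
for n in range(1, 5):
    print(n, digits_of_triple_exp(n))
# 1 -> 2 digits, 2 -> 5, 3 -> 78, 4 -> 19729
```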

Just to get an idea of what this means: For an absurdly tiny input size of n = 4 (an ontology of 4 axioms, or 4 classes, or whatever, i.e., a typical spot test case for OWL reasoners :-)), we get

```
2^(2^(2^4)) = 2^(2^16) = 2^65536 > 10^19728
```

i.e., a number with almost 20,000 decimal digits. Just for comparison: according to Wolfram Alpha, the estimated number of atoms in the universe is about 10^80, i.e., a number with "only" 81 decimal digits!
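You can verify the "> 10^19728" claim without ever building the huge number itself, using logarithms (a sketch; the variable names are mine):

```python
import math

# 2^k has floor(k * log10(2)) + 1 decimal digits.
exponent = 2 ** (2 ** 4)  # 2^(2^4) = 65536
digits = math.floor(exponent * math.log10(2)) + 1
print(digits)  # 19729, i.e., almost 20,000 decimal digits
```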

So, stuff like this is pretty much all you can learn from worst-case complexity results for decidable logical formalisms (or any other computational language). Not particularly useful, right? What you really want to know is what runtime to expect in practice, but you will hardly find theoretical results about what can happen in practice.

The only thing you can do is look at the results of evaluations that people have conducted for certain practical input scenarios. Typically, the results in these cases are muuuuuuch better than the numbers from complexity theory suggest. In some rare cases things may really go bad, but I would not worry much about that as long as the system works well on most of the input. In any case, complexity theory gives us no helpful advice about expected runtime behaviour. We have to test, test, test! That is the only way to get useful information about the runtime behaviour of a particular reasoner on a particular kind of input data.