What inferencing strategy does the TopBraid SPIN implementation use?

Is it always forward chaining with materialization, or some form of query rewriting or a combination of both? Also can an arbitrary SPARQL endpoint support SPIN with just the open-source ARQ-based API?

I believe it is forward chaining, with or without materialization. You import your rule set, as a set of SPARQL queries defining the rules, modeled in RDF as described by the W3C Member Submission. You get the inferences when you click the inferencer button and the rules are run against your current ontology. You can subsequently "assert" the resulting inferences into your current ontology (thus materializing them). It does not perform backward chaining, as that would mean the SPARQL queries you write in the SPARQL editor would compute the inferences only when the query is executed (EDIT: see the answer by @scotthenninger). You must first materialize them into your model, as described above, in order for your SPARQL queries to be affected by the model changes.

Incorporating the SPIN reference API into an arbitrary SPARQL endpoint is possible as long as the endpoint's developer can work in Java, or has a way to interface Java with the endpoint's architecture, and can abide by the AGPL license (UPDATE: now the Apache 2.0 license).

@harschware has most of this right. A significant difference is that SPIN can use SPARQL Update, so INSERT and DELETE are possible. In this case, the SPIN rules will directly modify the data graphs.

@harschware correctly describes the behavior when executing SPIN in TopBraid Composer. In addition, the full SPIN engine, named TopSPIN, is available for Web services. This means one can make a Web service call that runs TopSPIN to execute the rules before submitting the query (or after, or however the SPARQLMotion pipeline is designed), and the results can be sent back in a variety of ways, including as a SPARQL endpoint response (XML or JSON). The SPARQLMotion pipeline can specify whether the SPIN inferences are discarded when the script finishes or materialized in some data graph. (Disclosure: I work for TopQuadrant)

Backward chaining is indeed supported when using magic properties, which are another part of the SPIN submission.

If you have static data then you can pre-compute the inferences and either add (assert) them to the relevant query graph, or create union (MultiUnion) graphs that join the asserted and the inferred triples.

However, if you operate on changing data and want to compute inferences on the fly, because there are too many possible inferences to materialize, it is often a better idea to exploit SPIN functions and magic properties. In a nutshell, you "infer" the information as you need it, at query time, by calling SPIN functions that "live" on the server. For example, assume you want to compute a rectangle's area from its width and height. You could either define this as a CONSTRUCT rule and infer the areas beforehand, or simply put the logic that computes the area into a SPIN function or magic property and ask intelligent queries, e.g.

SELECT ?rect ?area
WHERE {
    ?rect a ex:Rectangle .
    BIND (exspin:computeArea(?rect) AS ?area) .
}

The computeArea function above would do the necessary computations on demand and always use the latest data. You can also turn it into a magic property, to make your query more readable:

SELECT ?rect ?area
WHERE {
    ?rect a ex:Rectangle .
    ?rect exspin:area ?area .
}

And yes, TopSPIN uses a fixpoint iteration: it repeatedly runs all rules until no new triples have been inferred. In many cases just a single iteration is needed, e.g. with mapping tasks, and this can be configured in the TopSPIN engine. I am sure smarter engines could be implemented for subsets of SPIN that do things like rule chaining in RETE networks, but we haven't seriously looked into this (at TopQuadrant).