The relation of Linked Data/Semantic Web to REST

How are the Linked Data principles and the common Semantic Web technology stack related to the constraints of the REST architectural style?

Let's take the constraints of the REST architectural style as listed here:

  • Resource Identification
  • Uniform Interface
  • Self-Describing Messages
  • Hypermedia Driving Application State
  • Stateless Interactions (see here for a good explanation of what stateless means in this context)

Resource Identification is clearly addressed in points 1 (URIs) and 2 (HTTP URIs) of the Linked Data principles as defined by timbl. However, the explicit suggestion to use HTTP URIs runs against the REST feature of a uniform, generic interface between components ("A REST API should not be dependent on any single communication protocol", see here). Identification is separated from interaction.
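
To make this distinction concrete, here is a minimal sketch (Python with requests and rdflib; the URIs are just placeholders): the HTTP URI identifies the resource whether or not we ever interact with it, and dereferencing it over HTTP is a separate, optional step.

```python
import requests
from rdflib import Graph, URIRef

resource = URIRef("http://example.org/people/alice")  # identification only

# The URI can be used purely as an identifier, e.g. inside RDF statements ...
g = Graph()
g.parse(data=f'<{resource}> <http://xmlns.com/foaf/0.1/name> "Alice" .',
        format="turtle")

# ... and, independently of that, it can be dereferenced (interaction),
# because it happens to be an HTTP URI.
response = requests.get(str(resource), headers={"Accept": "text/turtle"})
print(response.status_code)
```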

Is the common, layered Semantic Web technology stack an implementation of a Uniform Interface in the sense of the REST principles? Or is it only HTTP as the communication protocol? And what does "The same small set of operations applies to everything" then mean? Do I have to enable the processing of every operation on every (information) resource? Or does it mean that I only have to provide a uniform behaviour when processing operations on (information) resources, e.g., if a specific operation is not possible or not allowed on a specific resource, then the component has to communicate this in a uniform way?
[edit] A verification re. the issue of how the small set of operations of the Uniform Interface has to be supported (using HTTP as the implementation example):

HTTP operations are generic: they are allowed or not, per resource, but they are always valid. (see here)

This is in accord with my last statement.
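
A small sketch of what I mean by "communicate this in a uniform way" (Python with requests; the URI is a placeholder): a server does not need to support every operation on every resource, but it has to answer in the standard HTTP vocabulary, e.g. with 405 Method Not Allowed plus an Allow header, or via OPTIONS listing the permitted methods.

```python
import requests

resource = "http://example.org/dataset/graph-1"  # placeholder URI

# Ask which of the generic operations this particular resource supports.
opts = requests.options(resource)
print(opts.headers.get("Allow"))          # e.g. "GET, HEAD, OPTIONS"

# Attempting an operation the resource does not allow is still "valid":
# the answer is a uniform 405 Method Not Allowed, not some custom error format.
resp = requests.delete(resource)
if resp.status_code == 405:
    print("DELETE not allowed here; allowed:", resp.headers.get("Allow"))
```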

As the common, layered Semantic Web technology stack uses HTTP as the communication protocol, it uniformly defines/provides the small set of operations of the Uniform Interface, too. However, the media types define processing models ("Every media type defines a default processing model.", see here). Thereby, layered encoding is possible (see here), e.g., "application/rdf+turtle": the RDF model as the knowledge representation structure (data model) and Turtle as the syntax (other knowledge representation languages, e.g., RDF Schema, are referenced "in-band", via namespace references). Furthermore,

The media type identifies a specification that defines how a representation is to be processed. (see here)
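
In code this means the media type of the representation, not the URI, tells the client which processing model to apply. A sketch with requests and rdflib (URIs are placeholders, and I assume the server does content negotiation):

```python
import requests
from rdflib import Graph

resource = "http://example.org/people/alice"  # placeholder

resp = requests.get(resource, headers={"Accept": "text/turtle"})
media_type = resp.headers.get("Content-Type", "").split(";")[0].strip()

# The media type, not the URI, selects the processing model.
if media_type in ("text/turtle", "application/x-turtle"):
    g = Graph()
    g.parse(data=resp.text, format="turtle")   # Turtle syntax -> RDF model
    print(len(g), "triples")
elif media_type == "application/rdf+xml":
    g = Graph()
    g.parse(data=resp.text, format="xml")
else:
    print("No processing model known for media type:", media_type)
```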

Side note: I know there is some progress in providing media type specifications as resources with a URI. However, as far as I know, their resource URIs lack a good machine-processable, dereferenceable specification description, e.g., there is no machine-processable HTML specification that enables a machine agent to know that "the anchor elements with an href attribute create a hypertext link that, when selected, invokes a retrieval request (GET)" (this issue is derived from a community statement and is not really verified; however, I currently would agree with it ;) ; please correct me if this assertion is wrong). All in all, an agent must be able to automatically learn the processing model of a previously unknown media type, if wanted (analogous to the HTTP Upgrade header field). I know that there is some progress (discussion) in the TAG community re. a better introduction of new media types.

To summarize, the important aspect is that the media type specifications, and the knowledge representation language specifications in general, also have to define the processing model of specific link types (e.g., href in HTML) in a machine-processable way (is this currently really the case? - I would say no!). This is addressed by the constraints "Self-Describing Messages" and "Hypermedia Driving Application State" (a.k.a. HATEOAS).
In other words, I would (currently) conclude that only the methods of the HTTP protocol are an implementation of a set of operations of a Uniform Interface, and the Semantic Web knowledge representation languages are related to the other two constraints. [/edit]

Self-Describing Messages are enforced for machine processing by using the common knowledge representation languages of the Semantic Web (i.e., RDF Model, RDF Schema, OWL, RIF) as a basis, and all knowledge representation languages used (incl. further Semantic Web ontologies) are referenced in this 'message'. This is somehow generalized in the third Linked Data principle as defined by timbl ("provide useful information, using the standards").
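
As an illustration of those "in-band" references: the vocabularies used in the message are themselves identified by HTTP URIs, so an agent can extract and dereference them to learn more. A sketch with rdflib (the data and URIs are made up):

```python
from rdflib import Graph

# A self-contained Turtle message; the vocabulary it uses (FOAF here)
# is referenced in-band via its namespace URI.
data = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/people/alice> a foaf:Person ;
    foaf:name "Alice" ;
    foaf:knows <http://example.org/people/bob> .
"""

g = Graph()
g.parse(data=data, format="turtle")

# The message carries the URIs of the vocabularies it is built from, so an
# agent can dereference them (they are HTTP URIs) to retrieve RDFS/OWL
# descriptions of the terms -- that is what makes the message self-describing.
for prefix, ns in g.namespaces():
    if prefix == "foaf":
        print("vocabulary used:", ns)
```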

The fourth Linked Data principle as defined by timbl ("Include links to other URIs") is somehow related to Hypermedia Driving Application State of the REST principles. This principle can again be strengthened for better machine processing by using the common knowledge representation languages of the Semantic Web as a basis. However, I'm a bit unclear on how the links drive my application state. Nevertheless, I guess that the application state would change when navigating to a resource by dereferencing a link (HTTP URI).
[edit]This is explained in the introduction section of Principled Design of the Modern Web Architecture:

The name "Representational State Transfer" is intended to evoke an image of how a well-designed Web application behaves: a network of Web pages forms a virtual state machine allowing a user to progress through the application by selecting a link or submitting a short data-entry form, with each action resulting in a transition to the next state of the application by transferring a representation of that state to the user.

[/edit]
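
A "follow your nose" sketch of how dereferencing a link changes the application state (Python, requests + rdflib; the URIs and the choice of rdfs:seeAlso as link predicate are just for illustration): the client's current state is the representation it last retrieved, and selecting one of the links embedded in it transfers it to the next state.

```python
import requests
from rdflib import Graph, URIRef
from rdflib.namespace import RDFS

def dereference(uri):
    """Retrieve the RDF representation of a resource (the current 'state')."""
    resp = requests.get(uri, headers={"Accept": "text/turtle"})
    g = Graph()
    g.parse(data=resp.text, format="turtle")
    return g

start = URIRef("http://example.org/people/alice")  # placeholder entry point
state = dereference(str(start))                    # current application state

# The representation contains links (HTTP URIs in object position).
# Selecting one of them and dereferencing it is the state transition.
for next_uri in state.objects(start, RDFS.seeAlso):
    if isinstance(next_uri, URIRef):
        state = dereference(str(next_uri))         # transition to next state
        break
```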

Stateless Interaction is not really covered by the Linked Data principles as defined by timbl, is it? However, when realizing "state as a resource" (cf. here), I can again use the common knowledge representation languages of the Semantic Web as a basis for describing states and use HTTP URIs to make these state resources accessible as well.
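
A sketch of "state as a resource" (placeholder URIs and a made-up vocabulary; I assume a server that accepts PUT of Turtle): instead of the server keeping session state between requests, the client describes its state in RDF, stores it under its own HTTP URI, and refers to that URI in later, self-contained requests.

```python
import requests
from rdflib import Graph, URIRef, Literal, Namespace

EX = Namespace("http://example.org/vocab#")          # made-up vocabulary
state_uri = "http://example.org/app/states/42"       # placeholder state URI

# Describe the application state in RDF ...
g = Graph()
g.add((URIRef(state_uri), EX.currentStep, Literal("checkout")))

# ... and store it as its own resource, so the interaction stays stateless:
# every later request can simply reference (or re-fetch) this URI instead of
# relying on server-side session memory.
requests.put(state_uri,
             data=g.serialize(format="turtle"),
             headers={"Content-Type": "text/turtle"})
```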

Would you agree with (parts of) my interpretation?
Finally, are the principles of Linked Data really intended to be read-only? I thought read and write would fit the principles of REST better.

A source where this topic is also discussed:

Two points for you to consider:

  1. Linked Data tends to consist of big datasets collected/curated by some institution that then publishes them for others to use. The organisation publishing the data will often be using it for its own internal purposes, so in a lot of cases allowing write access to arbitrary users is not an option, since it would affect their own use of the data (though not always).
  2. The technologies for writing to the Semantic Web (and thus Linked Data) via REST are only just becoming standardised, i.e. SPARQL Update and the SPARQL Uniform HTTP Protocol for RDF Graph Management (see the sketch after this list). These are still undergoing the standardisation process, which will most likely be completed later this year.
    Yes, writing data can be done on the Semantic Web, but the technologies for it aren't as widely available as those for just reading the data.
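
A rough sketch of what those two mechanisms look like on the wire (Python with requests; the endpoint URLs and the graph URI are placeholders, and the details may still shift while standardisation is ongoing):

```python
import requests

# 1. SPARQL Update: POST the update operation to the update endpoint.
update = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
INSERT DATA { <http://example.org/people/alice> foaf:name "Alice" . }
"""
requests.post("http://example.org/sparql/update",            # placeholder
              data=update,
              headers={"Content-Type": "application/sparql-update"})

# 2. Uniform HTTP Protocol for RDF Graph Management: address a named graph
#    directly and PUT (or DELETE) its full RDF representation.
turtle = '<http://example.org/people/alice> a <http://xmlns.com/foaf/0.1/Person> .'
requests.put("http://example.org/rdf-graph-store",           # placeholder
             params={"graph": "http://example.org/g1"},
             data=turtle,
             headers={"Content-Type": "text/turtle"})
```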

Concerning read-only - for Linked Data currently found in the wild, yes, this is the case, but people are working on solutions for a write-enabled Linked Data Web. Rob already mentioned SPARQL as one of the key components; for the big picture, see Realizing a write-enabled Web of Data. In fact, we're now launching the WebID Incubator Group at W3C, another pivotal piece in the overall puzzle re a full-fledged write-enabled Web of Linked Data.

Here is another approach, which differs from the mainstream REST vs RDF interpretation.

What if we have a domain model relying on RDF, split it into domain objects, and expose the subset of triples describing each object via a REST interface? And allow writing/updating triples back via the REST interface? And also allow generating new triples by launching calculations that create new REST resources and new triples.

And consider using SPARQL on the client side of REST :) The implementation of the server side is not restricted to being a SPARQL endpoint, or even to being implemented as triple storage. It just needs to offer an RDF serialization of its resources, instead of, or in addition to, XML, microformats and HTML.
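
A hedged sketch of that read/update round trip (Python, requests + rdflib; the service URL and vocabulary are invented): each domain object is a REST resource whose representation is the subset of triples describing it, and a client updates it by PUTting a modified graph back.

```python
import requests
from rdflib import Graph, URIRef, Literal, Namespace

EX = Namespace("http://example.org/vocab#")              # invented vocabulary
obj = "http://example.org/compounds/42"                  # one domain object

# Read: the REST resource serves the subset of triples describing the object.
resp = requests.get(obj, headers={"Accept": "text/turtle"})
g = Graph()
g.parse(data=resp.text, format="turtle")

# Modify the local copy of the graph (client-side; SPARQL could be used here).
g.add((URIRef(obj), EX.predictedToxicity, Literal(0.73)))

# Write: PUT the updated set of triples back to the same resource.
requests.put(obj,
             data=g.serialize(format="turtle"),
             headers={"Content-Type": "text/turtle"})
```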

Then we can use conventional technologies for implementing read/write REST services, but expose objects via their RDF representation and link to the big LOD cloud when necessary. If the LOD cloud finds this useful at a certain point, it is trivial to retrieve the RDF representations of these REST resources, in a similar way to how search engines gather their information, rather than compiling the LOD cloud manually.

That's what we have been doing at http://opentox.org, trying to develop a REST/RDF web services platform for predictive toxicology with the REST/RDF OpenTox API.

This has also been presented at last year's ACS RDF Symposium - RESTful RDF services for predictive toxicology.

Unfortunately, there is also a problem with large queries, which has led to the SPARQL standard advocating the use of POST where you should be using GET.
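
To illustrate the point (Python with requests; the endpoint URL is a placeholder): the SPARQL protocol sends a query via GET with the query text in the URL, which breaks down for very large queries because of practical URL length limits, so the protocol also allows sending the same read-only query in a POST body.

```python
import requests

endpoint = "http://example.org/sparql"        # placeholder endpoint
query = "SELECT * WHERE { ?s ?p ?o } LIMIT 10"

# The "RESTful" way: a read-only query goes in the URL via GET ...
requests.get(endpoint, params={"query": query})

# ... but very large queries exceed practical URL length limits, so the
# protocol also permits POSTing the query body, even though semantically
# it is still a safe, read-only retrieval.
requests.post(endpoint,
              data=query,
              headers={"Content-Type": "application/sparql-query"})
```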

I think Data Wikis are exciting, but I think many Linked Data publishers have a distinct point-of-view, and that's a good thing.

Have you ever heard of a game called "Association Football?"

http://en.wikipedia.org/wiki/Association_football

You know the game, but you probably don't know that name. I like watching the World Cup and I've coached soccer for kids but until I became a semantician I never knew that, according to Wikipedia and Freebase, the game that many people call soccer or football (or some derivative of football) is officially called "Association Football."

Some political decision was made to call it that on Wikipedia, and I guess that's how it is, but if I were making a service aimed at ordinary people, I think I'd use a different name for that sport. I could certainly use the Freebase API to change that name, but it would get changed back. Organizations that want to maintain a stable POV won't want a write interface.