How are
- the principles of Linked Data as a data publishing guideline (independent of Semantic Web technology) and
- the Semantic Web as a common, standardized technology stack for machine-processable knowledge representation and management on the Web
related to
- the principles of REST as an architectural design guideline for distributed hypermedia systems (see also this presentation from Roy T. Fielding, which explicitly introduces a 5th uniform interface constraint)?
Let's take the constraints of the REST architectural style as summarized here:
- Resource Identification
- Uniform Interface
- Self-Describing Messages
- Hypermedia Driving Application State
- Stateless Interactions (see here for a good explanation of what stateless means in this context)
Resource Identification is clearly addressed in points 1 (URIs) and 2 (HTTP URIs) of the Linked Data principles as defined by timbl. However, the explicit suggestion to use HTTP URIs conflicts with the REST feature of a uniform, generic interface between components ("A REST API should not be dependent on any single communication protocol", see here). Identification is separated from interaction.
Is the common, layered Semantic Web technology stack an implementation of a Uniform Interface in the sense of the REST principles? Or is it only HTTP as communication protocol? And what does "The same small set of operations applies to everything" then mean? Do I have to enable the processing of every operation on every (information) resource? Or does this mean that I only have to provide a uniform behaviour when processing operations on (information) resources, e.g., if a specific operation is not possible or allowed on a specific resource, then the component has to communicate this in a uniform way?
[edit]
A verification re. the issue of how the small set of operations of the Uniform Interface has to be supported (using HTTP as implementation example):
HTTP operations are generic: they are allowed or not, per resource, but they are always valid. (see here)
This is in accord with my last statement.
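This can be sketched as a tiny dispatcher (the resources and their permitted methods below are hypothetical): the set of methods is fixed by HTTP and generic; what varies per resource is only whether a method is allowed, and a refusal is communicated uniformly via 405 Method Not Allowed plus an Allow header, rather than with an ad-hoc error.

```python
# Per-resource method permissions (hypothetical example data).
ALLOWED = {
    "/people/alice": {"GET", "HEAD"},                # read-only resource
    "/people/alice/notes": {"GET", "HEAD", "POST"},  # appendable collection
}

# The uniform, generic set of operations fixed by the protocol.
HTTP_METHODS = {"GET", "HEAD", "POST", "PUT", "DELETE", "OPTIONS", "PATCH"}

def handle(method, path):
    """Return (status, headers) for a request, uniformly across resources."""
    if method not in HTTP_METHODS:
        return 501, {}   # not part of the uniform interface at all
    if path not in ALLOWED:
        return 404, {}
    if method not in ALLOWED[path]:
        # The refusal itself is expressed through the uniform interface.
        return 405, {"Allow": ", ".join(sorted(ALLOWED[path]))}
    return 200, {}

print(handle("GET", "/people/alice"))     # (200, {})
print(handle("DELETE", "/people/alice"))  # (405, {'Allow': 'GET, HEAD'})
```

So every operation is always "valid" in the sense that the server answers it in a uniform way; it is just allowed or not per resource.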
As the common, layered Semantic Web technology stack uses HTTP as communication protocol, it uniformly defines/provides the small set of operations of the Uniform Interface, too. However, the media types define processing models ("Every media type defines a default processing model.", see here). Thereby, layered encoding is possible (see here), e.g., "text/turtle": the RDF model as knowledge representation structure (data model) and Turtle as syntax (other knowledge representation languages, e.g., RDF Schema, are provided "in-band", via namespace references). Furthermore:
The media type identifies a specification that defines how a representation is to be processed. (see here)
Side note: I know there is some progress in providing media type specifications as resources with a URI. However, as far as I know, their resource URIs lack a good machine-processable, dereferenceable specification description, e.g., there is no machine-processable HTML specification that enables a machine agent to know that "the anchor elements with an href attribute create a hypertext link that, when selected, invokes a retrieval request (GET)" (this issue is derived from a community statement and is not really verified; however, I currently would agree with it ;) ; please correct me if this assertion is wrong). All in all, an agent must be able to automatically learn the processing model of a previously unknown media type, if wanted (analogous to the HTTP Upgrade header field). I know that there is some progress (discussion) in the TAG community re. a better introduction of new media types.
To summarize, the important aspect is that the media type specifications, and the knowledge representation language specifications in general, also have to define the processing model of specific link types (e.g., href in HTML) in a machine-processable way (is this currently really the case? - I would say no!). This is addressed by the constraints "Self-Describing Messages" and "Hypermedia Driving Application State" (a.k.a. HATEOAS).
In other words, I would (currently) conclude that only the methods of the HTTP protocol are an implementation of a set of operations of a Uniform Interface, and the Semantic Web knowledge representation languages are related to the other two constraints.
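As a small illustration of the role of media types here, a minimal (much simplified) content negotiation sketch — the stored representations and the Accept handling below are assumptions, not a complete implementation: the same resource is served under different media types, and the media type received is what tells the agent which processing model to apply to the body.

```python
# Two hypothetical representations of one and the same resource.
REPRESENTATIONS = {
    "text/turtle": "<#me> a <http://xmlns.com/foaf/0.1/Person> .",
    "text/html": "<a href='/friends'>friends</a>",
}

def negotiate(accept):
    """Pick a representation by a (greatly simplified) Accept header."""
    for media_type in (t.strip() for t in accept.split(",")):
        if media_type in REPRESENTATIONS:
            return media_type, REPRESENTATIONS[media_type]
    return None, None   # in real HTTP: 406 Not Acceptable

mt, body = negotiate("application/rdf+xml, text/turtle")
print(mt)   # text/turtle
```

The URI identifies the resource; which processing model applies (RDF graph vs. HTML hypertext) is signalled only by the negotiated media type.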
[/edit]
Self-Describing Messages are enforced for machine processing by using the common knowledge representation languages of the Semantic Web (i.e., the RDF model, RDF Schema, OWL, RIF) as a basis, and all used knowledge representation languages (incl. further Semantic Web ontologies) are referenced in this 'message'. This is somehow generalized in the third Linked Data principle as defined by timbl ("provide useful information, using the standards").
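A minimal sketch of what "referenced in the message" can look like in practice, assuming a Turtle payload (the data and the regex-based extraction are illustrative only, not a real Turtle parser): the namespace URIs of all vocabularies used travel in-band with the message, so an agent can dereference them to learn more.

```python
import re

# A hypothetical self-describing message: the Turtle payload carries
# references to every vocabulary it uses via @prefix declarations.
TURTLE = """\
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
<#me> a foaf:Person ; rdfs:label "Alice" .
"""

def in_band_vocabularies(turtle_text):
    """Collect the namespace URIs the message itself points to."""
    return dict(re.findall(r'@prefix\s+(\w*):\s+<([^>]+)>', turtle_text))

print(in_band_vocabularies(TURTLE))
# {'foaf': 'http://xmlns.com/foaf/0.1/',
#  'rdfs': 'http://www.w3.org/2000/01/rdf-schema#'}
```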
The fourth Linked Data principle as defined by timbl ("Include links to other URIs") is somehow related to Hypermedia Driving Application State of the REST principles. This principle can again be strengthened for better machine processing by using the common knowledge representation languages of the Semantic Web as a basis. However, I'm a bit unclear about how the links drive my application state. Nevertheless, I guess that the application state changes when navigating to a resource by dereferencing a link (HTTP URI).
[edit]This is explained in the introduction section of Principled Design of the Modern Web Architecture:
The name "Representational State Transfer" is intended to evoke an image of how a well-designed Web application behaves: a network of Web pages forms a virtual state machine allowing a user to progress through the application by selecting a link or submitting a short data-entry form, with each action resulting in a transition to the next state of the application by transferring a representation of that state to the user.
[/edit]
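Fielding's "virtual state machine" can be sketched as follows (the tiny in-memory "web" is hypothetical): the agent's state is the representation it currently holds, and every state transition is the dereferencing of a link found in that representation.

```python
# Hypothetical web: URI -> (representation text, links in the representation).
WEB = {
    "/": ("start page", {"people": "/people"}),
    "/people": ("list of people", {"alice": "/people/alice"}),
    "/people/alice": ("about alice", {}),
}

def follow(start_uri, *link_names):
    """Drive the application state purely by links in each representation."""
    state = WEB[start_uri]             # initial representation = initial state
    for name in link_names:
        uri = state[1][name]           # select a link from the current state
        state = WEB[uri]               # transition = dereference it
    return state[0]

print(follow("/", "people", "alice"))  # about alice
```

The agent never needs an out-of-band map of URIs; beyond the entry point, everything it can do next is offered by the representation it currently holds.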
Stateless Interaction is not really covered by the Linked Data principles as defined by timbl, is it? However, when realizing "state as a resource" (cf. here), I can again use the common knowledge representation languages of the Semantic Web as a basis for describing states, and use HTTP URIs to make these state resources accessible as well.
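A sketch of the "state as a resource" idea under these assumptions (the URIs and the JSON state description are made up; a Semantic Web variant would use an RDF serialization instead of JSON): application state gets its own URI and representation, so the individual interactions themselves can stay stateless.

```python
import json

STORE = {}   # the server's resource store; no per-client session state

def put(uri, representation):
    """Store a representation under a URI (stateless: request is complete)."""
    STORE[uri] = representation

def get(uri):
    """Retrieve a representation by URI alone, with no session context."""
    return STORE.get(uri)

# The client materializes its application state as a resource ...
put("/states/checkout-42", json.dumps({"step": "payment", "cart": ["book"]}))

# ... and any later, independent request can refer to it by URI alone.
state = json.loads(get("/states/checkout-42"))
print(state["step"])   # payment
```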
Would you agree with (parts of) my interpretation?
Finally, are the principles of Linked Data really intended to be read-only? I thought read and write would better fit the principles of REST, wouldn't they?
Sources where this topic is also discussed:
- Parallel discussion on the rest-discuss mailing list
- Linked Data for RESTafarians
- Linked Data and REST architectural style
- RESTful Design Patterns, httpRange-14 & Linked Data
- Principled Design of the Modern Web Architecture
- The proceedings of the Second International Workshop on RESTful Design (WSREST2011) contain a paper with the title "REST and Linked Data: a match made for domain driven development?" (p. 22 ff)
- "Why SPARQL endpoints aren't even remotely RESTful" (another related discussion on the rest-discuss mailing list)