This summer I was reading the book "Semantics" by Geoffrey Leech, which was written in 1974. I found it quite remarkable that his model of semantics in language was subject-predicate-object triples. I also found a 1968 paper by Ross Quillian titled "Semantic Memory" in which he built sets of triples defining relationships between a small number of words and analyzed the resulting graph. The biggest difference with what I'm doing today is that he had a thousand triples and I have a billion.
It's clear to me that the triple has a history before RDF. My question is, how far back does that history go?
Reminds me of a favourite quote of mine:
"If you wish to make an apple pie from scratch, you must first invent the universe." – Carl Sagan
...so there's an argument to be made that the answer is 13.75 ± 0.13 billion years.
A little bit more recently: Entity–Attribute–Value (EAV) models have been around since at least the seventies
(...here, E ↔ S, A ↔ P, V ↔ O).
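The correspondence is easy to see in code. A minimal sketch (all data and names here are made up for illustration): an EAV row read as an S-P-O triple, and a set of such triples forming a directed labelled graph.

```python
# One fact stored as an Entity-Attribute-Value row (illustrative data):
eav = ("patient_42", "blood_pressure", "120/80")

# The same row read as an RDF-style subject-predicate-object triple:
# entity -> subject, attribute -> predicate, value -> object.
subject, predicate, obj = eav

# A set of such triples is already a directed labelled graph:
# subjects and objects are nodes, predicates are edge labels.
triples = {
    ("patient_42", "blood_pressure", "120/80"),
    ("patient_42", "treated_by", "dr_warner"),
    ("dr_warner", "works_at", "some_hospital"),
}

def outgoing(node, triples):
    """Edges leaving a node, as (label, target) pairs."""
    return {(p, o) for (s, p, o) in triples if s == node}
```

Whether you call the middle element an attribute or a predicate is mostly a matter of which tradition you come from; the shape of the data is the same.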
Wikipedia gives a reference for an EAV-based system from '72:
Warner, H. R.; Olmsted, C. M.; Rutherford, B. D. (1972), "HELP—a program for medical decision-making", Comput Biomed Res 5
...but I think they might go back further to the origins of LISP ('58).
...but again, I think you'll probably find something similar before that... like directed labelled graphs, and so forth.
It's giants standing on shoulders all the way down.
A triple, stating that two things are related... would probably make a good patent. ;-)
SCNR