There are plenty of papers in the academic literature (the link is a Google Scholar search for "rdf ontology mapping" and yields about 22,000 results, of which at least the first 10 pages are all relevant), but has anyone turned this into a real-world service?
What I would like to have is a service where I give it a vocabulary/ontology and it suggests a mapping to other vocabularies/ontologies.
This does not even have to be a full mapping service; it could just be a pseudonym service: e.g. given rdfs:label it might return skos:prefLabel, skos:altLabel, etc.
Are there any such services already available on the Semantic Web or is anyone you know of developing such a service?
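To illustrate what I mean by a pseudonym service, here is a minimal sketch. The property names are real RDFS/SKOS/DC/FOAF terms, but the lookup table and the `suggest` function are hypothetical, hand-made examples, not an existing service:

```python
# Minimal sketch of the "pseudonym service" idea: given a vocabulary
# term, suggest candidate near-equivalents from other vocabularies.
# The mapping table is illustrative and hand-curated, not authoritative.

SYNONYMS = {
    "rdfs:label":   ["skos:prefLabel", "skos:altLabel", "dc:title", "foaf:name"],
    "rdfs:comment": ["skos:definition", "dc:description"],
    "rdfs:seeAlso": ["skos:related"],
}

def suggest(term):
    """Return candidate mappings for a term, or an empty list."""
    return SYNONYMS.get(term, [])

print(suggest("rdfs:label"))
# → ['skos:prefLabel', 'skos:altLabel', 'dc:title', 'foaf:name']
```

A real service would of course back this with curated or mined alignments rather than a static dictionary.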
INRIA has developed an ontology alignment server which can, among other things, produce an alignment from two input ontologies. The Alignment Server has existed since 2007 and is distributed with the Alignment API, which is well maintained and frequently updated. Anyone can install an alignment server relatively easily by following the documentation provided with the API. However, the API does not come with a human interface, which would have to be built on top of it. If you need an interface, you can use the NeOn Toolkit, which provides a plugin for working with the Alignment API and the Alignment Server. There's even a video about it. In principle, the Alignment Server is supposed to take advantage of a distributed architecture in which many servers would exist throughout the world and interact with each other to avoid re-computing the same correspondences over and over again. Unfortunately, it is not very well advertised and few people or organisations are deploying it. You can read about the Alignment API in this journal article.
The VMF (Vocabulary Mapping Framework) made some efforts in 2009, but concentrates on vocabularies in the library domain (RDA, CRM, MARC 21, FRBR, ONIX, etc.). An ontology was published that contains these elements, including their mappings. I could not figure out whether anyone has used this for a real-world service, but it would be useful.
To look up mapping candidates (T-Box) I still consult http://pingthesemanticweb.com/stats/namespaces.php or http://prefix.cc/ ; let me know if there is something better than that.
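For what it's worth, prefix.cc lookups are easy to script, since each prefix is exposed under a simple per-format URL (the endpoint pattern below is as I recall it; check the site before relying on it):

```python
# Build a prefix.cc lookup URL for a given namespace prefix.
# Endpoint pattern is an assumption based on the site's documented
# per-prefix downloads (e.g. JSON, Turtle); verify before use.

def prefixcc_url(prefix, fmt="file.json"):
    """URL resolving a prefix (e.g. 'foaf') to its namespace URI."""
    return "http://prefix.cc/%s.%s" % (prefix, fmt)

print(prefixcc_url("foaf"))  # → http://prefix.cc/foaf.file.json
```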
That vocabulary has "see also" links to other related vocabularies. In the case of Wikidata, you would express the entire vocabulary in RDF and provide links from terms in the above vocabulary to terms in other vocabularies as "see also"s.
I see three attacks on the problem, which of course may be combined.
(1) "The Hitchhikers Guide To The Semantic Web"
I use prefix.cc all the time because it's hard to keep all those namespaces in my head and I want a place to cut and paste them from so I won't make little typos that break my code.
Now, I can certainly imagine something smarter that loads a large number of OWL and RDFS schemas from all over the place and provides some kind of browsing interface or inference over the mass. LOV (mentioned above) is a step in this direction, but I think it's possible to do better.
(2) Attack from above (upper and middle ontologies)
Now, if we had some way of specifying what predicates and other terms "mean" in machine readable form, we could do a lot better. In general the problem is hard, but I think progress can be made.
I've been asked about publishing LSCOM annotations about photos from Ookaboo, so I did some thinking about how to connect LSCOM concepts to Freebase concepts. Usually an LSCOM concept is "almost" equivalent to a Freebase concept, but (1) it only applies to a visual image and (2) there is usually some restriction. LSCOM 056 ("Exterior shots of a school for children, not a college or a university") is close to fbase:en.school, but has the restrictions that it has to be an exterior shot and that it can't be a university. I'd imagine that a small "bridge" ontology could be made that specifies the differences between LSCOM and Freebase concepts.
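As a rough sketch of what one bridge axiom might look like, here is the LSCOM 056 example expressed in Turtle (embedded as a string). The `lscom:` and `ex:` namespaces and the restriction properties are hypothetical placeholders; only skos:broadMatch and the Freebase namespace are real:

```python
# Sketch of a "bridge" axiom relating an LSCOM concept to a Freebase
# concept plus restrictions: LSCOM 056 is narrower than fbase:en.school,
# limited to exterior shots and excluding universities.
# lscom:, ex:, and the restriction properties are made-up for illustration.

BRIDGE_TTL = """\
@prefix lscom: <http://example.org/lscom#> .
@prefix fbase: <http://rdf.freebase.com/ns/> .
@prefix skos:  <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:    <http://example.org/bridge#> .

lscom:056 skos:broadMatch fbase:en.school ;
          ex:restrictedTo ex:ExteriorShot ;
          ex:excludes fbase:en.university .
"""

print(BRIDGE_TTL)
```

A few hundred such axioms would cover the systematic differences between the two vocabularies without having to redefine either one.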
Upper ontologies have a reputation of being a bit like dianetics or antigravity, but I see so much agony about vocabulary matching in 2012 that it might be time to give them another look.
(3) Attack from below
Here we have some really big knowledge base, like Sindice, and we try to mine large amounts of A-Box data for what it can tell us about the T-Box. Machine learning methods could be used to, say, propose the equivalence or near-equivalence of concepts and predicates.
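The simplest version of this idea can be sketched in a few lines: compare the (subject, object) extensions of two predicates across the A-Box and flag pairs with high overlap as equivalence candidates. The toy triples and the 0.8 threshold are made up for illustration:

```python
# Sketch of attack (3): mine A-Box triples for T-Box hints. Predicates
# whose (subject, object) extensions overlap heavily are candidates for
# equivalence or near-equivalence. Data and threshold are illustrative.

def jaccard(a, b):
    """Jaccard similarity of two sets (0.0 for two empty sets)."""
    a, b = set(a), set(b)
    return len(a & b) / float(len(a | b)) if a | b else 0.0

triples = [
    ("alice", "foaf:name",  "Alice"),
    ("alice", "rdfs:label", "Alice"),
    ("bob",   "foaf:name",  "Bob"),
    ("bob",   "rdfs:label", "Bob"),
    ("bob",   "foaf:mbox",  "bob@example.org"),
]

# Group (subject, object) pairs by predicate.
extensions = {}
for s, p, o in triples:
    extensions.setdefault(p, set()).add((s, o))

# Propose predicate pairs above an arbitrary similarity threshold.
for p in sorted(extensions):
    for q in sorted(extensions):
        if p < q and jaccard(extensions[p], extensions[q]) >= 0.8:
            print("%s ~ %s" % (p, q))
# → foaf:name ~ rdfs:label
```

At Sindice scale you would need sampling and smarter blocking than this all-pairs loop, but the signal being mined is the same.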
Perhaps the best answer is a type (1) system informed by (2) and (3).