What is the Semantic Web (or Web 3.0)?
The Semantic Web is a web of data. There is lots of data we all use every day, and it is not part of the web. I can see my bank statements on the web, and my photographs, and I can see my appointments in a calendar. But can I see my photos in a calendar to see what I was doing when I took them? Can I see bank statement lines in a calendar?
Why not? Because we don’t have a web of data. Because data is controlled by applications, and each application keeps it to itself.
The Semantic Web is about two things. It is about common formats for integration and combination of data drawn from diverse sources, whereas the original Web mainly concentrated on the interchange of documents. It is also about a language for recording how the data relates to real-world objects. That allows a person, or a machine, to start off in one database, and then move through an unending set of databases which are connected not by wires but by being about the same thing.
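The "connected by being about the same thing" idea can be sketched in a few lines. In the toy example below (all URIs and data are hypothetical), a photo data set and a calendar data set know nothing about each other, but both describe the same place; because facts are stored as (subject, property, value) triples keyed by shared identifiers, we can hop from one data set into the other.

```python
# Hypothetical data: facts are (subject, property, value) triples, and
# subjects/values are URIs, so the same identifier can appear anywhere.
photo_data = [
    ("http://example.org/photo/42", "takenAt", "2007-03-14T10:30"),
    ("http://example.org/photo/42", "takenIn", "http://example.org/place/paris"),
]

calendar_data = [
    ("http://example.org/event/7", "date", "2007-03-14"),
    ("http://example.org/event/7", "location", "http://example.org/place/paris"),
    ("http://example.org/event/7", "title", "Client meeting"),
]

def facts_about(thing, *datasets):
    """Collect every triple mentioning one identifier, across all data sets."""
    return [t for ds in datasets for t in ds if t[0] == thing or t[2] == thing]

# Starting from the place the photo was taken, we land in the calendar data:
place = "http://example.org/place/paris"
linked = facts_about(place, photo_data, calendar_data)
print(linked)
```

Nothing joins these data sets except the shared identifier, which is exactly the point: no wires, just agreement on what the data is about.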
Wikipedia describes Web 3.0, the Semantic Web, as follows:
Web 3.0 is one of the terms used to describe the evolutionary stage of the Web that follows Web 2.0. Given that the technical and social possibilities identified in this latter term are yet to be fully realized, the nature of defining Web 3.0 is highly speculative. In general it refers to aspects of the Internet which, though potentially possible, are not technically or practically feasible at this time.
Is this the end of Google (or maybe the start of the end…)?
Wikipedia 3.0: The End of Google?
The Semantic Web (or Web 3.0) promises to “organize the world’s information” in a dramatically more logical way than Google can ever achieve with their current engine design. This is especially true from the point of view of machine comprehension as opposed to human comprehension. The Semantic Web requires the use of a declarative ontological language like OWL to produce domain-specific ontologies that machines can use to reason about information and make new conclusions, not simply match keywords.
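To make "reason about information and make new conclusions" concrete, here is a toy sketch (plain Python, not real OWL, with made-up class names) of one inference an ontology enables: if a Hatchback is a kind of Car, and a Car is a kind of Vehicle, a machine can conclude that a Hatchback is a kind of Vehicle even though nobody stated it.

```python
# Hypothetical ontology fragment: (subclass, superclass) pairs.
subclass_of = {
    ("Hatchback", "Car"),
    ("Sedan", "Car"),
    ("Car", "Vehicle"),
}

def infer_closure(pairs):
    """Apply the transitivity rule repeatedly until no new facts appear."""
    known = set(pairs)
    while True:
        new = {(a, c)
               for (a, b) in known
               for (b2, c) in known
               if b == b2} - known
        if not new:
            return known
        known |= new

closure = infer_closure(subclass_of)
print(("Hatchback", "Vehicle") in closure)  # a conclusion nobody wrote down
```

Real OWL reasoners handle far richer rules (properties, cardinality, equivalence), but the principle is the same: stated facts plus an ontology yield derived facts, not just keyword matches.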
This quote, from a blog post at evolvingtrends, makes some interesting points. If (at last) we get to the point where web pages are organised, or structured, in such a way that a form of natural search language can evolve (akin to SQL, etc.), then the web will become self-describing. The ‘structure’ of the web can actually be its own database [engine].
Once we evolve (naturally…?) the constructs within web pages to semantically describe the data embedded in the page, should we not have a perfect solution for searching and selecting data? Imagine, if you will, pages with pseudo-XML markup conforming to pre-defined schemas (XML, I hear you shouting) that lets any web page embed its data in such a way that a natural search engine can find it; not only find it, but find it with some degree of accuracy.
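As a rough sketch of what a crawler could do with such markup, the snippet below pulls exact properties out of a page using Python's standard-library HTML parser. The `data-prop` attributes and the page itself are invented for illustration; real deployments use vocabularies like RDFa or microdata, but the mechanics are similar.

```python
from html.parser import HTMLParser

# A hypothetical page that embeds machine-readable properties in its markup.
PAGE = """
<html><body>
  <div data-item="car">
    <span data-prop="make">Ford</span>
    <span data-prop="model">Focus</span>
    <span data-prop="price">7500</span>
  </div>
</body></html>
"""

class PropertyExtractor(HTMLParser):
    """Collect the text content of every element carrying a data-prop attribute."""

    def __init__(self):
        super().__init__()
        self.current = None  # name of the property we are inside, if any
        self.facts = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "data-prop" in attrs:
            self.current = attrs["data-prop"]

    def handle_data(self, data):
        if self.current and data.strip():
            self.facts[self.current] = data.strip()
            self.current = None

parser = PropertyExtractor()
parser.feed(PAGE)
print(parser.facts)  # {'make': 'Ford', 'model': 'Focus', 'price': '7500'}
```

The crawler never has to guess what "7500" means from surrounding prose; the markup says it is the price.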
Instead of flapping around trying to parse data (plain text, at the moment) out of long-winded HTML pages, the constructs of the web pages would form a sort of self-describing database that conforms to common standards. For example, you would be able to search all web pages, throughout the web, for cars with certain characteristics and requirements, and actually GET good results. No longer the wishy-washy results returned by the ‘best guess’ methods of the search engines; now information that conforms exactly to our search criteria – utopia?
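The car search imagined above can be sketched over structured records. The listings and field names below are hypothetical; the point is that once data is structured, "diesel, under 8000, fewer than 60,000 miles" is an exact query rather than a keyword guess.

```python
# Hypothetical structured listings, as a crawler might have harvested them.
listings = [
    {"make": "Ford", "model": "Focus", "fuel": "diesel", "price": 7500, "miles": 42000},
    {"make": "VW",   "model": "Golf",  "fuel": "petrol", "price": 6900, "miles": 58000},
    {"make": "Audi", "model": "A3",    "fuel": "diesel", "price": 9900, "miles": 31000},
]

def search(records, **criteria):
    """Return records matching every criterion; values may be predicates."""
    def matches(rec):
        return all(c(rec[k]) if callable(c) else rec[k] == c
                   for k, c in criteria.items())
    return [r for r in records if matches(r)]

hits = search(listings,
              fuel="diesel",
              price=lambda p: p < 8000,
              miles=lambda m: m < 60000)
print(hits)  # only the Ford Focus satisfies all three criteria
```

Every result conforms exactly to the criteria, which is precisely the contrast with 'best guess' keyword ranking.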
Watch this space (and of course millions of other ‘spaces’) as we get closer to having a useful ‘search-enabled’ web…
via W3C Semantic Web Activity.