I have blogged about the exciting potential of DBpedia before (dbpedia – structured information from Wikipedia => dbchem, http://wwmm.ch.cam.ac.uk/blogs//?p=316). It is a semistructured RDF triple collection created automatically from Wikipedia. The really exciting thing is that huge numbers of WPedians have contributed to DBpedia without even knowing it. Simply by evolving simple community metadata (tagging and infoboxes) the WPedians have created a top-class semantic resource. A WP category of, say, "1997 deaths" gets translated to a triple something like:
:Diana :deathDate "1997"^^xsd:date
which says that the object with label "Diana" has a "deathDate" property with value "1997", which is of type date.
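The category-to-triple mapping above can be sketched in a few lines of Python. This is purely illustrative: the URIs, property name (`deathDate`) and datatype are my assumptions for the example, not DBpedia's actual extraction rules.

```python
import re

def category_to_triple(subject, category):
    """Toy sketch: turn a Wikipedia category like '1997 deaths' into an
    N-Triples-style statement. URIs and property names are illustrative,
    not DBpedia's real extraction code."""
    m = re.match(r"(\d{4}) deaths$", category)
    if m is None:
        return None  # category carries no death-year information
    year = m.group(1)
    return ('<http://dbpedia.org/resource/%s> '
            '<http://dbpedia.org/property/deathDate> '
            '"%s"^^<http://www.w3.org/2001/XMLSchema#gYear> .'
            % (subject, year))

print(category_to_triple("Diana,_Princess_of_Wales", "1997 deaths"))
```

The point is simply that a flat community tag, applied by editors with no knowledge of RDF, already carries enough structure to be lifted into a typed triple.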
Now the OKFN has blogged:
DBpedia recently released the new version of their dataset. The project aims to extract structured information from Wikipedia so that it can be queried like a database. On their blog they say:
The renewed DBpedia dataset describes 1,950,000 "things", including at least 80,000 persons, 70,000 places, 35,000 music albums, 12,000 films. It contains 657,000 links to images, 1,600,000 links to relevant external web pages and 440,000 external links into other RDF datasets. Altogether, the DBpedia dataset now consists of around 103 million RDF triples.

As well as improving the quality of the data, the new release includes coordinates for geographical locations and a new classificatory schema based on WordNet synonym sets. It is also extensively linked with many other open datasets, including: "Geonames, Musicbrainz, WordNet, World Factbook, EuroStat, Book Mashup, DBLP Bibliography and Project Gutenberg datasets". This is probably one of the largest open data projects currently out there – and it looks like they have done an excellent job at integrating structured data from Wikipedia with data from other sources. (For more on this see the W3C SWEO Linking Open Data project – which exists precisely in order to link more or less open datasets together.)
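"Queried like a database" means SPARQL over the public endpoint. Here is a sketch of how one might assemble such a request in Python; the endpoint URL and the `deathDate` property are assumptions for illustration, and no request is actually sent.

```python
from urllib.parse import urlencode

# Public DBpedia SPARQL endpoint (assumed location; check the project site).
ENDPOINT = "http://dbpedia.org/sparql"

# Illustrative query: people recorded with a 1997 death date.
# The dbpprop: property name is an assumption, not a verified schema term.
query = """
PREFIX dbpprop: <http://dbpedia.org/property/>
SELECT ?person WHERE {
  ?person dbpprop:deathDate ?d .
  FILTER (?d = "1997"^^<http://www.w3.org/2001/XMLSchema#gYear>)
}
LIMIT 10
""".strip()

# Build the GET URL one would fetch to run the query and get JSON results.
url = ENDPOINT + "?" + urlencode(
    {"query": query, "format": "application/sparql-results+json"}
)
print(url)
```

In practice one would fetch this URL (or POST the query) and parse the JSON result bindings; the sketch stops at URL construction to stay self-contained.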