Archive for the ‘Linked Open Data’ Category

Data export vs. faceted expressivity

Bakfiets by Anomalily on Flickr

Anyone visiting the Netherlands will inevitably stumble upon a “bakfiets” in the streets. This Dutch speciality, which looks like the result of cross-breeding a pick-up with a bike, can be used for many things, from getting the kids around to moving a fridge.

Now, let’s consider a Dutch bike shop that sells some bakfiets among other things. In its information system these items will surely be labelled as “bakfiets”, because that is just what they are. This information system can also be expected to be filled with inputs and semantics (table names, field names, …) in Dutch. If that bike shop wants to start selling its items outside of the Netherlands, the data will have to be exported into some international standard so that other sellers can re-import it into their own information systems. This is where things get problematic…

What happens to the “bakfiets” during the export? As it does not make sense to define an international-level class “bakfiets” – which can be translated as “freight bicycle” – every shop item of type “bakfiets” will most certainly be exported as an item of type “bike”. If the Dutch shop owner is lucky, the standard may let him indicate through a comment property that, no, this is not really just a standard two-wheeled bike. But even if the importer is able to use that comment (which is not guaranteed), the information is lost: when going international, every “bakfiets” becomes a regular bike. Even more worrying, besides the information loss there is no indication of how much of it is gone.

When the data is exported from one system and re-imported into another, specificity is lost

Semantic Web technologies can help here by enabling the qualification of shop items with facets rather than strict types; that is, assigning labels or tags to things instead of putting items into boxes. The Dutch shop will be able to express in its knowledge system that its bikes with a box are both of the specific type “bakfiets”, which makes sense only in the Netherlands, and instances of the international type “bike”. An additional piece of information in the knowledge base will connect the two types, saying that the former is a specialisation of the latter. The resulting information export flow is as follows:

  1. The Dutch shop assigns all the box-bikes to the class “bakfiets” and the regular bikes to the class “bike”.
  2. A “reasoner” infers that, because “bakfiets” is a specific type of “bike”, all these items are also of type “bike”.
  3. Another, non-Dutch, shop asking the Dutch shop for instances of “bike” will get a complete list of all the bikes and see that some of them are actually of type “bakfiets”.
  4. If his own knowledge system does not let him store facets, the importer will have to flatten the data to one class, but he will have received the complete information and will know how much of it is lost by removing facets (see the sketch after the figure below).
The data shared has different facets out of which the data importer can make a choice
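
To make this flow concrete, here is a minimal sketch using Python’s rdflib. The shop namespace, class names and item identifiers are invented for illustration, and a SPARQL property path stands in for a full RDFS reasoner:

```python
# Minimal sketch of the faceted export flow with rdflib.
# The shop: namespace and the item names are hypothetical.
from rdflib import Graph, Namespace, RDF, RDFS

SHOP = Namespace("http://example.org/shop/")

g = Graph()

# Step 1: items are typed with the most specific class available.
g.add((SHOP.item1, RDF.type, SHOP.Bakfiets))
g.add((SHOP.item2, RDF.type, SHOP.Bike))

# The knowledge base records that "bakfiets" specialises "bike".
g.add((SHOP.Bakfiets, RDFS.subClassOf, SHOP.Bike))

# Steps 2 and 3: a request for bikes follows the subclass link, so the
# importer receives both items together with their specific types.
query = """
SELECT ?item ?type WHERE {
    ?item a ?type .
    ?type rdfs:subClassOf* shop:Bike .
}
"""
for row in g.query(query, initNs={"shop": SHOP, "rdfs": RDFS}):
    print(row.item, row.type)  # item1 keeps its "Bakfiets" facet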

Beyond this illustrative example, data export presents real issues in many cases. Everyone usually wants to express their data using the semantics that apply to them, and has to force information into some other conceptualisation framework when this data is shared. A more detailed case for research data can be found in the following preprint article:

  • Christophe Guéret, Tamy Chambers, Linda Reijnhoudt, Frank van der Most, Andrea Scharnhorst, “Genericity versus expressivity – an exercise in semantic interoperable research information systems for Web Science”, arXiv preprint http://arxiv.org/abs/1304.5743, 2013

ICT 4 Development course final presentations

via ICT 4 Development course final presentations.

Why use the Web of Data?

A few days ago I had the pleasure, and the luck, of taking part in the series of webinars organised by AIMS. The goal I had set myself for my presentation (in French), entitled “Clarifier le sens de vos données publiques avec le Web de données” (“Clarifying the meaning of your public data with the Web of Data”), was to demonstrate the benefit of using the Web of Data from the point of view of the data provider, through to the consumer. Giving a presentation without any feedback from the audience was an interesting experience that I would gladly repeat if another occasion presents itself. Especially if Imma and Christophe are at the controls! Thanks to them everything was perfectly organised and the webinar went without a hitch 🙂

If you want to see whether this presentation achieves its goal, the slides are available on Slideshare:

Another copy of this presentation is available on the AIMS SlideShare account.

Behind the scenes of a Linked Data tutorial

Last week, on the afternoon of November 22, I co-organised a tutorial about Linked Data aimed at researchers from the digital humanities. The objective was to give a basic introduction to the core principles and to do so in a very hands-on setting, so that everyone could get concrete experience with publishing Linked Data.

Everyone listening to Clement speaking about Linked Data and RDFa

To prepare this event, I teamed up with Clement Levallois (@seinecle) from the Erasmus University in Rotterdam. He is a historian of science with interests in network analysis, text processing and other compartments of the digital humanities. He had only heard of Linked Data and was eager to learn more about it. We started off by preparing together a presentation skeleton and the setup for the hands-on. During this he shouted every time I used a word he deemed too complex (“dereferencing”, “ontology”, “URI”, “reasoning”, …). In the end, “vocabulary” and “resource” are probably the two most technical concepts that made it through. I then took care of writing the slides, and he simplified them again before the tutorial. He was also the one who presented them; I just stood on the side the whole time.

The result: a researcher from the digital humanities explaining to a full room of fellow researchers what Linked Data is and how it can be useful to them. Everyone was very interested and managed to annotate some HTML pages with RDFa, thereby creating a social network of foaf:knows relations among the individuals they described 🙂 We concluded the tutorial by plotting that network using a tool that Clement developed.
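
For the curious, the kind of foaf:knows network the participants produced can be reproduced in a few lines of Python with rdflib. The people and relations below are invented, and the triples are asserted directly rather than parsed from the annotated HTML (RDFa parsing would require an additional parser, e.g. pyRdfa):

```python
# A toy version of the foaf:knows network built during the hands-on.
# People and relations are invented for illustration.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import FOAF

EX = Namespace("http://example.org/people/")

g = Graph()
for name in ("alice", "bob", "carol"):
    g.add((EX[name], RDF.type, FOAF.Person))
    g.add((EX[name], FOAF.name, Literal(name.capitalize())))

g.add((EX.alice, FOAF.knows, EX.bob))
g.add((EX.bob, FOAF.knows, EX.carol))

# Edge list of the social network, ready for any plotting tool.
for s, o in g.subject_objects(FOAF.knows):
    print(s, "->", o)
```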

This was a very efficient and interesting collaboration! For those interested in what we did, all the material is available on Dropbox and the presentation is on Slideshare:

5-star Linked Open Data pays more than Open Data

Let’s assume you own a CSV file with some valuable data. You derive some revenue from it by selling it to consumers who do traditional data integration. They take your file, import it into their own data storage solution (for instance, a relational database) and deploy applications on top of this data store.

Traditional data integration

Data integration is not easy and you’ve been told that Linked Open Data facilitates it, so you want to publish your data as 5-star Linked Data. The problem is that the first star calls for an “open license” (follow this link for an extensive description of the 5-star scheme) and that sounds orthogonal to the idea of making money by selling the data :-/

If you publish your CSV as-is, under an open license, you get 3 stars but don’t make money out of serving it. Trying to get 4 or 5 stars means more effort from you as a data publisher and will cost you some money, still without earning you anything back…

Well, let’s look at this 4th star again. Going from 3 stars to 4 means publishing descriptions of the entities on the Web. All your data items get a Web page of their own with the structured data associated to them. For instance, if your dataset contains a list of cities with their population, every one of these cities gets its own URI with the population indicated in it. From that point, you get the 5th star by linking these pages to other pages published as Linked Open Data.
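
As a sketch of what those 4th and 5th stars could look like in practice, here is a hedged example with Python’s rdflib: each CSV row becomes a resource with its own URI, and an owl:sameAs link to DBpedia provides the outgoing link. The base URI, the vocabulary choice (the DBpedia ontology) and the sample figures are assumptions for illustration, not a prescribed setup:

```python
# Turning rows of a cities CSV into per-entity RDF descriptions.
# Base URI, vocabulary and the sample data are illustrative only.
import csv
import io

from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import OWL, XSD

CITY = Namespace("http://example.org/city/")
DBO = Namespace("http://dbpedia.org/ontology/")

data = io.StringIO("name,population\nAmsterdam,821752\nUtrecht,330772\n")

g = Graph()
for row in csv.DictReader(data):
    city = CITY[row["name"]]  # 4th star: one URI per entity
    g.add((city, RDF.type, DBO.City))
    g.add((city, DBO.populationTotal,
           Literal(row["population"], datatype=XSD.integer)))
    # 5th star: a link to another Linked Open Data resource
    g.add((city, OWL.sameAs,
           URIRef("http://dbpedia.org/resource/" + row["name"])))

print(g.serialize(format="turtle"))
```

Served with content negotiation, each of these URIs can answer with either the HTML page a visitor sees or the RDF description a machine asks for, while the full dump stays behind the paywall.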

Roughly speaking, your CSV file is turned into a Web site, and this is how you can make money out of it. As with any website, visitors can look at individual pages and do whatever they want with them. They cannot, however, dump the entire web site onto their machine. Those interested in getting all the data can still buy it from you, either as a CSV or an RDF dump.

Users of your data then have the choice between two data usage processes: use parts of the data through the Linked Open Data access, or buy it all and integrate it. They are free to choose the best solution depending on their needs and resources.

Using Linked Open Data

Some added side bonuses of going 5-star instead of sticking at 3:

  • Because part of the data is open for free, you can expect more users screening it and reporting back errors;
  • Other data publishers can easily link their data sets with yours by re-using the URIs of the data items, which increases the value of the data;
  • In its RDF form, it is possible to add some links within the data set, thereby doing part of the data integration work on behalf of the data consumers – who will be grateful for it!
  • Users can deploy a variety of RDF-enabled tools to consume your data in various ways.

Sounds good, doesn’t it? So, why not publish all your 3-star data as 5-star right away? 😉

Downscaling Entity Registries for Poorly-Connected Environments

VeriSign logo (Photo credit: Wikipedia)

Emerging online applications based on the Web of Objects or Linked Open Data typically assume that connectivity to data repositories and entity resolution services is always available. This may not be a valid assumption in many cases. Indeed, there are about 4.5 billion people in the world who have no or limited Internet access. Many data-driven applications could have a critical impact on the lives of those people, yet remain inaccessible to them due to the architecture of today’s data registries.

Examples of data registries include the domain name registries: databases containing registered Internet domain names. They are necessary for every Web user wishing to visit a website knowing its URL (e.g. https://semweb4u.wordpress.com) rather than its IP address (e.g. 76.74.254.120). Another example of a data registry is the Digital Object Architecture (DOA), which assigns unique identifiers to digital objects (e.g. scientific publications).
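
The DNS case fits in one line: resolving a name is a query against exactly this kind of registry, and it fails as soon as the resolver is unreachable. A minimal illustration in Python:

```python
# Resolving a domain name to an IP address relies on the DNS
# registries being reachable; offline, this call simply fails.
import socket

print(socket.gethostbyname("semweb4u.wordpress.com"))
```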

Registries are critical components of today’s Internet architecture. They are widely used in everyday Web activities, but their usage is severely impaired in poorly connected or ad-hoc environments. In this context, centralized data management – as typically used by current data registries – is of limited practicability, if possible at all. There is a need for hybrid models mixing decentralized and hierarchical infrastructures to support data-driven applications in environments with limited Internet connectivity.

Philippe Cudré-Mauroux and I received a $200,000 research grant from VeriSign Inc. (PDF version) to investigate such novel approaches for data registries. During this 12-month project, we will develop decentralized solutions to the problems of entity publication, search, de-duplication, storage and caching. A running prototype will be tested on the XO laptop, a laptop used by young learners in developing countries – most often in a mesh context with limited Internet connectivity.

Please don’t hesitate to contact us to ask for information about this project, we’d be happy to talk more about our plans 🙂

Decentralised Open Data at PMOD
