OWL interoperability benchmarking

Benchmarking the interoperability of Semantic Web technology using OWL as interchange language


Motivation

Semantic Web technology is highly diverse and, although all these tools use ontologies of some kind, not all of them share a common knowledge representation model, which causes problems when these tools try to interoperate.

OWL is the language recommended by the World Wide Web Consortium for defining ontologies, and it currently seems the right choice as a language for interchanging them. However, the current interoperability between Semantic Web tools using OWL is unknown, and evaluating to what extent a tool is able to interchange ontologies with others is quite difficult, as no means are available for easily doing so.

An ideal scenario would be one in which tools interchange ontologies with minimal loss or addition of knowledge. However, the interoperability of current tools is far from this scenario. One way to improve interoperability is to benchmark the tools.

Benchmarking is a process for obtaining continuous improvement in a set of tools by systematically evaluating them and comparing their performance with that of the tools considered the best. This makes it possible to extract the practices used by the best tools and to achieve superior performance across all the tools.

The goals of the benchmarking are to assess the current interoperability of Semantic Web tools using OWL as the interchange language, to identify the causes of interoperability problems, and to produce best practices and tool improvement recommendations.

Previously, we performed the RDF(S) interoperability benchmarking, in which we assessed the interoperability of tools using RDF(S) as the interchange language. This time we consider OWL as the interchange language instead of RDF(S), and we aim for a fully automatic execution of the experiments.

The benchmarking will be carried out by performing interoperability experiments according to a common experimentation framework; the results will then be collected, analysed, and written up in a public report, along with the best practices and tool improvement recommendations found.

Benefits

The benefits of the interoperability benchmarking concern the developers and the users of Semantic Web technology, as well as the Semantic Web community and the industrial sector as a whole.

Benefits for developers and users

Benefits for the Semantic Web community and the industrial sector

Experiments

The experiment to be performed consists of measuring the interoperability of the tools participating in the benchmarking by interchanging ontologies from one tool to another. From these measurements, we will determine the current interoperability between the tools, the causes of any problems, and improvement recommendations.

In this benchmarking activity we consider interoperability between tools that use an interchange language. To interchange ontologies from one tool to another, they must first be exported from the origin tool to a file, which is then imported into the destination tool. As ontologies exported by a tool are usually serialised in the RDF/XML syntax, we will use this format for the interchange.
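As an illustration, the following is a minimal sketch of how an interchange could be checked for loss or addition of knowledge at the level of RDF triples. The use of Python with rdflib and the file names are our assumptions for the example; they are not part of the benchmarking infrastructure:

from rdflib import Graph
from rdflib.compare import graph_diff, to_isomorphic

# Hypothetical file names: the ontology as exported by the origin tool,
# and the same ontology after being imported into and re-exported by
# the destination tool, both in RDF/XML syntax.
original = to_isomorphic(Graph().parse("original.owl", format="xml"))
interchanged = to_isomorphic(Graph().parse("interchanged.owl", format="xml"))

in_both, only_original, only_interchanged = graph_diff(original, interchanged)

# Knowledge lost in the interchange: triples present only in the original.
print(f"Lost triples:  {len(only_original)}")
# Knowledge added in the interchange: triples present only in the result.
print(f"Added triples: {len(only_interchanged)}")

In the ideal scenario described above, both counts would be zero; any non-empty difference points to an interoperability problem between the two tools.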

The execution of the experiments will be fully automatic. To that end, the IBSE tool has been developed. For the experiments to run automatically, a method must be implemented for each tool, as described on the IBSE web page.
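As a rough sketch, such a per-tool method could look like the following. The function name, its signature, and the command-line invocation are hypothetical placeholders; the actual interface each participant must implement is the one defined on the IBSE web page:

import subprocess

def interchange(input_file: str, output_file: str) -> None:
    """Hypothetical per-tool method: import the RDF/XML ontology in
    input_file into the tool and export it back to output_file.

    The command below is a placeholder; each participant replaces it
    with whatever invocation their tool actually requires."""
    subprocess.run(
        ["mytool", "--import", input_file, "--export", output_file],
        check=True,
    )

Given such a method for every participating tool, the experiment runner can interchange each benchmark ontology between every pair of tools without manual intervention.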

The ontologies that will be interchanged between all the tools are those of the OWL Import Benchmark Suite.

Timeline

The timeline for the benchmarking is the following:

30th June 2007: Implementation of the interfaces for the tools
15th July 2007: Execution of the experiments
20th August 2007: Analysis of the results

Participation

Any organisation is welcome to participate in the OWL interoperability benchmarking.

Organisations participating in the benchmarking are expected to implement the required IBSE method for their tool (an easy task) and to analyse their tool's results.


This benchmarking activity is supported by the Knowledge Web Network of Excellence.

For any comment, suggestion, or question, or if you want to participate in the benchmarking, please write an email to: .