There are many existing ontology development tools, used by different groups of people to perform diverse tasks. Although each tool provides different functionalities, users tend to stick to a single tool, since they cannot interchange their ontologies from one tool to another.
The ideal scenario would be to interchange ontologies between ontology development tools so as to benefit from the functionalities of each of them. However, this scenario is only possible if the tools interoperate correctly.
Benchmarking is a process for obtaining continuous improvement in a set of tools by systematically evaluating them and comparing their performance with that of the tools considered to be the best. This makes it possible to extract the best practices used by the best tools and to achieve superior performance across all of them.
This performance can be considered in terms of efficiency, scalability, interoperability, usability, etc.
The benchmarking will have two goals: to evaluate the interoperability of ontology development tools using RDF(S) as the interchange language, and to improve this interoperability by identifying best practices and recommendations for tool improvement.
This will be carried out by performing interoperability experiments according to a common experimentation framework; their results will then be collected, analysed, and written up in a public report, along with the best practices and tool improvement recommendations found.
The benchmarking will provide numerous benefits both to the participants and to the Semantic Web community and the industrial sector.
In this benchmarking activity, we will assess the interoperability between ontology development tools using RDF(S) files to interchange ontologies. To interchange an ontology from one tool to another, it must first be exported from the origin tool to an RDF(S) file, and then this file must be imported into the destination tool. Since the ontologies exported by tools are usually serialised using the RDF/XML syntax, we will use this format for the interchange.
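As an illustration of this interchange step, the following sketch uses the Python rdflib library (which is not one of the benchmarked tools); the file names are hypothetical:

    from rdflib import Graph

    # Parse the RDF/XML file exported by the origin tool
    # ("exported.rdf" is a hypothetical file name).
    g = Graph()
    g.parse("exported.rdf", format="xml")

    # Write it out again in RDF/XML so that the destination tool
    # can import it.
    g.serialize(destination="to_import.rdf", format="xml")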
This scenario requires the RDF(S) importers and exporters of the ontology development tools to work correctly; otherwise, ontologies cannot be interchanged reliably. The benchmarking is therefore composed of the following stages:
Agreement stage | The quality of the benchmark suites that will be used is essential for the results of the benchmarking. Therefore, the first step is to agree on the definition of these benchmark suites. |
Evaluation stage 1 | Then, with the final versions of the benchmark suites, the RDF(S) importers and exporters of the ontology development tools will be evaluated. |
Evaluation stage 2 | Once the RDF(S) importers and exporters have been evaluated, a second stage will evaluate the ontology interchange between ontology development tools. |
The RDF(S) Import Benchmark Suite has been built taking into account the components of the knowledge model of RDF(S) commonly used by ontology development tools (classes, properties and instances) and the combinations that can be obtained with these components.
The common RDF(S) Export Benchmark Suite has been built taking into account the common components of the knowledge model of ontology development tools (classes, datatype properties, object properties, and instances) and the combinations that can be obtained with these components.
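As an illustration of the kind of combinations these suites cover, the following sketch builds a small ontology with a class hierarchy, a property, and an instance, and serialises it as RDF/XML. It uses the Python rdflib library; the namespace and all term names are hypothetical and are not taken from the actual benchmark suites:

    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/benchmark#")  # hypothetical namespace

    g = Graph()
    g.bind("ex", EX)

    # A class hierarchy: Dog is a subclass of Animal.
    g.add((EX.Animal, RDF.type, RDFS.Class))
    g.add((EX.Dog, RDF.type, RDFS.Class))
    g.add((EX.Dog, RDFS.subClassOf, EX.Animal))

    # A property with a domain; RDF(S) has a single rdf:Property
    # construct, while the datatype/object property distinction
    # belongs to the tools' knowledge models.
    g.add((EX.hasName, RDF.type, RDF.Property))
    g.add((EX.hasName, RDFS.domain, EX.Animal))

    # An instance of the class, with a literal value for the property.
    g.add((EX.rex, RDF.type, EX.Dog))
    g.add((EX.rex, EX.hasName, Literal("Rex")))

    # With rdflib 6 or later, serialize() returns a string.
    print(g.serialize(format="xml"))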
In this first stage, the participants should send the organisers any comments and corrections, so as to obtain the definitive benchmark suites that will be used by all the tools.
In this stage, the participants will execute all the benchmarks in the final versions of the RDF(S) Import and Export benchmark suites and will check that the RDF(S) importer and exporter of their tool work correctly.
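One possible way to check an execution automatically, sketched here with the Python rdflib library under the assumption that both the original benchmark and the tool's re-exported ontology are available as RDF/XML files (the file names are hypothetical), is to compare the two graphs for isomorphism:

    from rdflib import Graph
    from rdflib.compare import isomorphic

    # The original benchmark file and the file the tool exported after
    # importing it (both file names are hypothetical).
    original = Graph().parse("benchmark.rdf", format="xml")
    roundtrip = Graph().parse("roundtrip.rdf", format="xml")

    # isomorphic() matches blank nodes structurally, so two
    # serialisations of the same ontology compare as equal even if
    # their node identifiers differ.
    print("Ontology preserved:", isomorphic(original, roundtrip))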
After this stage, the participants are expected to provide a document with:
As the goal of the benchmarking is to improve the tools, there is no problem in correcting any bugs they may have so that they pass benchmarks they previously failed. Nevertheless, participants should keep track of the failed benchmarks and of the changes made to correct the tool.
The results of Evaluation stage 1 are available; they include the results of executing the RDF(S) Import Benchmark Suite and the RDF(S) Export Benchmark Suite in several tools.
In this stage, the participants will import into their tool the RDF(S) files exported by the other tools in the previous stage and will assess the ontology interchange between the two tools.
To achieve this, they will use the RDF(S) Interoperability Benchmark Suite.
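For instance, after importing the file exported by another tool, a participant could check that specific components survived the interchange. The following sketch reuses the hypothetical namespace and terms of the earlier examples; the file name is hypothetical as well:

    from rdflib import Graph, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/benchmark#")  # hypothetical

    # The RDF(S) file exported by the other tool
    # ("other_tool_export.rdf" is a hypothetical file name).
    g = Graph().parse("other_tool_export.rdf", format="xml")

    # Verify that the interchanged components are still there.
    assert (EX.Dog, RDFS.subClassOf, EX.Animal) in g, "class hierarchy lost"
    assert (EX.rex, RDF.type, EX.Dog) in g, "instance lost"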
All the necessary files for executing the RDF(S) Interoperability Benchmark Suite (Evaluation stage 1 results, exported RDF(S) files, and result templates) can be downloaded in a single file.
After this stage, the participants are expected to provide a document with:
The timeline for the benchmarking is the following:
August 26th | Agreement on the RDF(S) import and export benchmark suites |
September 30th | First results of the RDF(S) import and export experiments |
October 10th-11th | Interoperability Working Days in Madrid |
October 28th | Web page updated with the final versions of the benchmark suites |
November 18th | Results of the RDF(S) import and export experiments with the updated benchmark suites |
November 25th | Web page updated with the results from all the tools of the RDF(S) import and export experiments |
December 23rd | Results of the ontology interchange experiments |
Every organisation is welcome to participate in benchmarking the interoperability of ontology development tools. If you develop an ontology development tool, you can participate with your own tool; if you are a user of such tools, you can participate with your preferred one.
Organisations participating in the benchmarking are expected to participate in the development of the benchmark suites and in the experimentation related to their tool.
If you want to participate in the benchmarking or have any further questions about it, please contact Raúl García Castro at the following email address: .
Current benchmarking results are available in the Knowledge Web deliverable D1.2.2.1.1 Benchmarking the interoperability of ontology development tools using RDF(S) as interchange language:
This benchmarking activity is supported by the Knowledge Web Network of Excellence.