Re-using and combining multiple ontologies on the Web is bound to lead to inconsistencies between the combined vocabularies. Even many of the ontologies in use today turn out to be inconsistent once some of their implicit knowledge is made explicit. This calls for efficient and robust approaches to dealing with inconsistencies in the Semantic Web. Various frameworks for processing inconsistent ontologies have been proposed, ranging from methods for reasoning with inconsistent ontologies to approaches for debugging them. To enable a user or a system developer to decide which method is best suited for his/her task, a comprehensive evaluation and benchmarking of the proposed approaches is needed. This deliverable investigates methods and results for benchmarking the processing of inconsistent ontologies. First, we present a methodology study of benchmarking the processing of inconsistent ontologies. We develop a gold standard specification language for the automatic/semi-automatic evaluation of processing inconsistent ontologies, and we have implemented a benchmarking suite for processing inconsistent ontologies; this document provides a detailed manual on how to use the suite. We have performed a series of benchmarking experiments with realistic and large-scale inconsistent ontologies. In this document, we report a comprehensive evaluation of various methods for processing inconsistent ontologies, including a) syntactic versus semantic approaches, b) linear extension versus multi-step extension, c) blind backtracking versus informed backtracking, and d) reasoning with inconsistent ontologies versus debugging inconsistent ontologies. These methods are evaluated with respect to three benchmarking factors: quality of query answers, performance, and scalability. We finally discuss the results and draw conclusions about the future of processing inconsistent ontologies.