Issue title: Special Issue on Benchmarking Linked Data
Guest editors: Axel-Cyrille Ngonga Ngomo, Irini Fundulaki and Anastasia Krithara
Article type: Research Article
Authors: Usbeck, Ricardo [a],[*] | Röder, Michael [a] | Hoffmann, Michael [c] | Conrads, Felix [a] | Huthmann, Jonathan [c] | Ngonga-Ngomo, Axel-Cyrille [a] | Demmler, Christian [c] | Unger, Christina [b]
Affiliations: [a] DICE – Data Science Group, Paderborn University, Germany | [b] CITEC, University of Bielefeld, Germany | [c] AKSW Group, University of Leipzig, Germany
Correspondence: [*] Corresponding author. E-mail: [email protected].
Abstract: The necessity of making the Semantic Web more accessible to lay users, alongside the uptake of interactive systems and smart assistants for the Web, has spawned a new generation of RDF-based question answering systems. However, the fair evaluation of these systems remains a challenge due to the different types of answers they provide. Hence, repeating previously published experiments, or even benchmarking on the same datasets, remains a complex and time-consuming task. We present a novel online benchmarking platform for question answering (QA) that relies on the FAIR principles to support the fine-grained evaluation of question answering systems. We detail how the platform addresses the FAIR benchmarking of question answering systems through the rewriting of URIs and URLs. In addition, we provide different evaluation metrics, measures, datasets and pre-implemented systems, as well as methods to work with novel formats for the interactive and non-interactive benchmarking of question answering systems. Our analysis shows that most current frameworks are tailored towards particular datasets and challenges and do not provide generic models. In addition, while most frameworks perform well in the annotation of entities and properties, the generation of SPARQL queries from annotated text remains a challenge.
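The fine-grained evaluation the abstract refers to typically compares a system's answer set against a gold-standard answer set per question and aggregates the results. The following is a minimal illustrative sketch of such metrics (per-question precision/recall/F1 and a macro-averaged F1); it is not code from the platform itself, and the example answer URIs are hypothetical.

```python
# Illustrative sketch: answer-set-based QA evaluation metrics.
# Not taken from the benchmarking platform described in the paper.

def prf(gold, system):
    """Precision, recall and F1 for one question's answer sets."""
    if not gold and not system:
        return 1.0, 1.0, 1.0  # both empty: treat as a correct "no answer"
    tp = len(gold & system)
    p = tp / len(system) if system else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def macro_f1(pairs):
    """Average per-question F1 over (gold, system) answer-set pairs."""
    return sum(prf(g, s)[2] for g, s in pairs) / len(pairs)

# Hypothetical gold/system answer sets for three questions:
pairs = [
    ({"dbr:Berlin"}, {"dbr:Berlin"}),            # exact match
    ({"dbr:Rhine", "dbr:Elbe"}, {"dbr:Rhine"}),  # partial answer
    ({"dbr:Goethe"}, set()),                     # no answer returned
]
print(round(macro_f1(pairs), 4))  # (1.0 + 2/3 + 0.0) / 3 ≈ 0.5556
```

Macro averaging (as above) weights every question equally; micro averaging, which sums true positives and answer counts across all questions before computing precision and recall, is the usual complementary measure.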
Keywords: Factoid question answering, benchmarking, repeatable open research
DOI: 10.3233/SW-180312
Journal: Semantic Web, vol. 10, no. 2, pp. 293-304, 2019