Interactive Instance-based Evaluation of Knowledge Base Question Answering
Daniil Sorokin, Iryna Gurevych
- Code (official): github.com/UKPLab/emnlp2018-question-answering-interface
Abstract
Most approaches to Knowledge Base Question Answering are based on semantic parsing. In this paper, we present a tool that aids in the debugging of question answering systems that construct a structured semantic representation for the input question. Previous work has largely focused on building question answering interfaces or evaluation frameworks that unify multiple data sets. The primary objective of our system, in contrast, is to enable interactive debugging of model predictions on individual instances (questions) and to simplify manual error analysis. Our interactive interface helps researchers to understand the shortcomings of a particular model, to qualitatively analyze the complete pipeline, and to compare different models. A set of sit-by sessions was used to validate our interface design.