
A Quantitative Evaluation of Natural Language Question Interpretation for Question Answering Systems

2018-09-20

Takuto Asakura, Jin-Dong Kim, Yasunori Yamamoto, Yuka Tateisi, Toshihisa Takagi


Abstract

Systematic benchmark evaluation plays an important role in improving Question Answering (QA) systems. While a number of evaluation methods exist for natural language (NL) QA systems, most of them consider only the final answers, limiting their utility to black-box-style evaluation. Here, we propose a subdivided evaluation approach that enables finer-grained evaluation of QA systems, and present an evaluation tool targeting the NL question (NLQ) interpretation step, the initial step of a QA pipeline. Experiments on two public benchmark datasets suggest that the proposed approach yields deeper insight into the performance of a QA system than black-box-style approaches, and should therefore provide better guidance for improving such systems.
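
To make the contrast concrete, here is a minimal Python sketch of the idea, not the authors' tool: a QA pipeline is split into an interpretation step (NLQ to a structured query) and an answering step, and each is scored separately. The `Example` fields, the exact-match metric, and the `interpret`/`answer` function signatures are illustrative assumptions.

```python
# Sketch: subdivided evaluation of a two-stage QA pipeline versus
# black-box (final-answer-only) evaluation. All names are hypothetical.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Example:
    question: str
    gold_interpretation: str  # e.g., a gold SPARQL query or logical form
    gold_answer: str


def evaluate(
    examples: List[Example],
    interpret: Callable[[str], str],  # NLQ -> structured interpretation
    answer: Callable[[str], str],     # interpretation -> final answer
) -> dict:
    """Score the interpretation step and the final answer separately."""
    interp_hits = answer_hits = 0
    for ex in examples:
        interpretation = interpret(ex.question)
        # Step-level check: did the system understand the question?
        if interpretation == ex.gold_interpretation:
            interp_hits += 1
        # End-to-end check: did the system produce the right answer?
        if answer(interpretation) == ex.gold_answer:
            answer_hits += 1
    n = len(examples)
    return {
        # A black-box evaluation reports only this number...
        "answer_accuracy": answer_hits / n,
        # ...while the subdivided evaluation also localizes errors
        # to the question-interpretation step.
        "interpretation_accuracy": interp_hits / n,
    }
```

A system with low answer accuracy but high interpretation accuracy fails downstream of question understanding, whereas low scores on both point to the NLQ interpretation step itself; a final-answer-only evaluation cannot make this distinction.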
