Identifying Where to Focus in Reading Comprehension for Neural Question Generation
Xinya Du, Claire Cardie
Abstract
A first step in the task of automatically generating questions for testing reading comprehension is to identify question-worthy sentences, i.e., sentences in a text passage that humans find worthwhile to ask questions about. We propose a hierarchical neural sentence-level sequence tagging model for this task, a step that existing approaches to question generation have ignored. The approach is fully data-driven --- with no sophisticated NLP pipelines or any hand-crafted rules/features --- and compares favorably to a number of baselines when evaluated on the SQuAD data set. When incorporated into an existing neural question generation system, the resulting end-to-end system achieves state-of-the-art performance for paragraph-level question generation for reading comprehension.
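To make the task framing concrete, the toy sketch below shows what sentence-level sequence tagging means here: given a paragraph split into sentences, the tagger emits one binary "question-worthy" label per sentence. This is a hypothetical pure-Python placeholder (the embedding, scoring, and weight choices are all illustrative assumptions), not the paper's hierarchical neural model.

```python
import math
import random
import zlib

EMB_DIM = 8

def embed(word, dim=EMB_DIM):
    # Hypothetical toy embedding: a deterministic pseudo-random
    # vector per word, seeded by a stable checksum of the word.
    rng = random.Random(zlib.crc32(word.encode("utf-8")))
    return [rng.uniform(-1.0, 1.0) for _ in range(dim)]

def sentence_vector(sentence):
    # Encode a sentence as the mean of its word vectors
    # (a stand-in for the paper's learned sentence encoder).
    vecs = [embed(w) for w in sentence.split()]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def question_worthy_score(vec, weights):
    # Logistic score in (0, 1); weights here are untrained placeholders.
    z = sum(w * x for w, x in zip(weights, vec))
    return 1.0 / (1.0 + math.exp(-z))

def tag_paragraph(sentences, weights, threshold=0.5):
    # Sentence-level tagging: one binary label per input sentence.
    return [
        int(question_worthy_score(sentence_vector(s), weights) >= threshold)
        for s in sentences
    ]

if __name__ == "__main__":
    paragraph = [
        "The Amazon is the largest rainforest on Earth.",
        "It spans nine countries in South America.",
        "Wow, that is big.",
    ]
    weights = [0.5] * EMB_DIM  # placeholder, not learned
    print(tag_paragraph(paragraph, weights))
```

The output is a label sequence aligned with the input sentences; sentences tagged 1 would be passed downstream to a question generation system, which is how the end-to-end pipeline described in the abstract is assembled.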