On the Summarization of Consumer Health Questions

2019-07-01 · ACL 2019 · Code Available

Asma Ben Abacha, Dina Demner-Fushman


Abstract

Question understanding is one of the main challenges in question answering. In real world applications, users often submit natural language questions that are longer than needed and include peripheral information that increases the complexity of the question, leading to substantially more false positives in answer retrieval. In this paper, we study neural abstractive models for medical question summarization. We introduce the MeQSum corpus of 1,000 summarized consumer health questions. We explore data augmentation methods and evaluate state-of-the-art neural abstractive models on this new task. In particular, we show that semantic augmentation from question datasets improves the overall performance, and that pointer-generator networks outperform sequence-to-sequence attentional models on this task, with a ROUGE-1 score of 44.16%. We also present a detailed error analysis and discuss directions for improvement that are specific to question summarization.
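The ROUGE-1 score reported above measures unigram overlap between a generated summary and a reference summary. As a rough illustration (a minimal sketch, not the paper's evaluation script, which in practice uses an official ROUGE package), the F1 variant can be computed like this, with the example questions below being hypothetical:

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall
    between a candidate summary and a reference summary."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Clipped overlap: each candidate unigram is credited at most as
    # many times as it occurs in the reference.
    overlap = sum(min(cand_counts[w], ref_counts[w]) for w in cand_counts)
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical consumer-health-question summaries, for illustration only.
reference = "what are the treatments for vitiligo"
candidate = "what treatments exist for vitiligo"
print(round(rouge1_f1(reference, candidate), 4))  # → 0.7273
```

Note that real ROUGE implementations typically add tokenization, optional stemming, and multi-reference handling on top of this core overlap computation.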
