MarCQAp: Effective Context Modeling for Conversational Question Answering

2021-11-16 · ACL ARR November 2021

Anonymous

Abstract

State-of-the-art models for Document-grounded Conversational Question Answering (DCQA) are based on the Transformer architecture. This raises two open issues: (a) Is it sufficient to concatenate the dialog history and the grounding document and perform cross-attention via a Transformer in order to capture the document/dialogue relationships? and (b) What is the best way to cope with the Transformers’ quadratic complexity, given the long inputs in DCQA? We address these issues in two dimensions. First, we introduce MarCQAp, a new modeling approach which encodes the historic answers by adding textual markups in the grounding document text, and then answers the question conditioned on the marked document. Second, we show that sparse self-attention architectures, such as the Longformer, can replace the Transformer, resolving the input length limitation. Our results demonstrate the effectiveness of each approach and their combination for explicit representation of dialogue/document relationships, significantly improving over state-of-the-art DCQA models.
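The core idea of MarCQAp, inserting textual markups for historic answers into the grounding document, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact markup format (here, turn-indexed tags like `<1> … </1>`) and the helper `insert_answer_markups` are assumptions, since the abstract only states that historic answers are encoded via textual markups in the document.

```python
def insert_answer_markups(document: str, history_spans: list[tuple[int, int, int]]) -> str:
    """Wrap each historic answer span in the grounding document with
    turn-indexed markers, e.g. "<2> answer </2>" for the answer at turn 2.

    history_spans: (turn, start_char, end_char) for each previous answer.
    The markup format is a hypothetical choice for illustration.
    """
    # Insert the rightmost span first so earlier character offsets
    # remain valid after each insertion.
    for turn, start, end in sorted(history_spans, key=lambda s: s[1], reverse=True):
        document = (
            document[:start]
            + f"<{turn}> " + document[start:end] + f" </{turn}>"
            + document[end:]
        )
    return document


doc = "The Eiffel Tower was built in 1889. It is located in Paris."
# Suppose turn 1's answer was "1889" (chars 30-34) and
# turn 2's answer was "Paris" (chars 53-58).
marked = insert_answer_markups(doc, [(1, 30, 34), (2, 53, 58)])
# marked == "The Eiffel Tower was built in <1> 1889 </1>. It is located in <2> Paris </2>."
```

The marked document (rather than a concatenation of raw history turns) is then fed to the reader model together with the current question, making the dialogue/document relationships explicit in the input text itself.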
