
Contextualized Word Representations for Reading Comprehension

2017-12-10 · NAACL 2018

Shimi Salant, Jonathan Berant

Code Available

Abstract

Reading a document and extracting an answer to a question about its content has attracted substantial attention recently. While most work has focused on the interaction between the question and the document, in this work we evaluate the importance of context when the question and document are processed independently. We take a standard neural architecture for this task, and show that by providing rich contextualized word representations from a large pre-trained language model as well as allowing the model to choose between context-dependent and context-independent word representations, we can obtain dramatic improvements and reach performance comparable to state-of-the-art on the competitive SQuAD dataset.
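The abstract describes letting the model choose between context-dependent word representations (from a pre-trained language model) and context-independent ones. A common way to realize such a choice is an element-wise learned gate that mixes the two embeddings per dimension. The sketch below illustrates this idea with NumPy; the dimensions, parameter names, and random values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 4  # toy embedding dimension (illustrative)

# Hypothetical inputs for one token: a context-independent embedding
# (e.g. a fixed word vector) and a context-dependent one
# (e.g. a pre-trained LM hidden state). Random stand-ins here.
e_static = rng.normal(size=d)
e_context = rng.normal(size=d)

# Gate parameters; in a trained model these would be learned.
W = rng.normal(size=(d, 2 * d))
b = np.zeros(d)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# The gate looks at both representations and outputs, per dimension,
# a value in (0, 1): how much of the contextual embedding to keep.
g = sigmoid(W @ np.concatenate([e_static, e_context]) + b)

# Convex combination of the two representations, element-wise.
e_mixed = g * e_context + (1.0 - g) * e_static

print(e_mixed.shape)
```

Because the gate output lies strictly in (0, 1), each component of the mixed embedding interpolates between the corresponding components of the two input representations.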


Benchmark Results

| Dataset  | Model                           | Metric | Claimed | Verified | Status     |
|----------|---------------------------------|--------|---------|----------|------------|
| SQuAD1.1 | RaSoR + TR + LM (single model)  | EM     | 77.58   |          | Unverified |
| SQuAD1.1 | RaSoR + TR (single model)       | EM     | 75.79   |          | Unverified |
