SOTAVerified

Reinforced Mnemonic Reader for Machine Reading Comprehension

2017-05-08

Minghao Hu, Yuxing Peng, Zhen Huang, Xipeng Qiu, Furu Wei, Ming Zhou

Abstract

In this paper, we introduce the Reinforced Mnemonic Reader for machine reading comprehension tasks, which enhances previous attentive readers in two aspects. First, a reattention mechanism is proposed to refine current attentions by directly accessing past attentions that are temporally memorized in a multi-round alignment architecture, avoiding the problems of attention redundancy and attention deficiency. Second, a new optimization approach, called dynamic-critical reinforcement learning, is introduced to extend the standard supervised method. It always encourages the model to predict a more acceptable answer, addressing the convergence suppression problem that occurs in traditional reinforcement learning algorithms. Extensive experiments on the Stanford Question Answering Dataset (SQuAD) show that our model achieves state-of-the-art results. Meanwhile, our model outperforms previous systems by over 6% in both Exact Match and F1 on two adversarial SQuAD datasets.
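The reattention idea described above can be sketched roughly as follows. This is an illustrative interpretation, not the paper's exact formulation: the function names, the `gamma` weight, and the simple additive way the previous round's attention biases the current similarity scores are all assumptions made for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def reattention(sim, past_attn, gamma=0.5):
    """Refine the current round's attention using the previous round's.

    sim:       (m, n) raw similarity scores between query and context
               positions in the current alignment round.
    past_attn: (m, n) attention distribution memorized from the previous
               round of the multi-round alignment architecture.
    gamma:     scalar weight (an assumed hyperparameter) controlling how
               strongly past attention biases the current scores.
    """
    # Additively bias current scores with temporally memorized attention,
    # so repeated rounds neither re-attend redundantly nor miss positions
    # attended before (the redundancy/deficiency problems in the abstract).
    refined = sim + gamma * past_attn
    return softmax(refined, axis=-1)
```

In this sketch, each alignment round would store its output distribution and pass it as `past_attn` to the next round.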

Benchmark Results

Dataset      | Model                                       | Metric | Claimed | Verified | Status
SQuAD1.1     | Reinforced Mnemonic Reader (ensemble model) | EM     | 82.28   |          | Unverified
SQuAD1.1     | Reinforced Mnemonic Reader (single model)   | EM     | 79.55   |          | Unverified
SQuAD1.1     | Mnemonic Reader (ensemble)                  | EM     | 74.27   |          | Unverified
SQuAD1.1     | Mnemonic Reader (single model)              | EM     | 71      |          | Unverified
SQuAD1.1 dev | R.M-Reader (single)                         | EM     | 78.9    |          | Unverified
TriviaQA     | Mnemonic Reader                             | EM     | 46.94   |          | Unverified