
Sequence Level Training with Recurrent Neural Networks

2015-11-20

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, Wojciech Zaremba


Abstract

Many natural language processing applications use language models to generate text. These models are typically trained to predict the next word in a sequence, given the previous words and some context such as an image. However, at test time the model is expected to generate the entire sequence from scratch. This discrepancy makes generation brittle, as errors may accumulate along the way. We address this issue by proposing a novel sequence level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE. On three different tasks, our approach outperforms several strong baselines for greedy generation. The method is also competitive when these baselines employ beam search, while being several times faster.
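The abstract contrasts standard next-word (cross-entropy) training with a sequence-level objective that directly optimizes the test-time metric. As a rough illustration of that idea, and not the paper's exact MIXER algorithm, the sketch below performs a REINFORCE-style update in which whole sequences are sampled from a toy decoder and scored with a crude BLEU-like n-gram overlap reward. PyTorch is assumed, and the `Decoder`, `ngram_overlap_reward`, and `sequence_level_step` names, the toy vocabulary, and the mean-reward baseline are all illustrative choices rather than details from the paper.

```python
# Minimal sketch of sequence-level training with a REINFORCE-style update.
# Assumptions: PyTorch, a toy single-layer LSTM decoder, and a simple
# n-gram-overlap reward standing in for BLEU/ROUGE. Not the paper's code.
import torch
import torch.nn as nn

VOCAB, HIDDEN, MAXLEN, BOS, EOS = 50, 64, 12, 0, 1

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.LSTMCell(HIDDEN, HIDDEN)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def step(self, tok, state):
        # One decoding step: embed the previous token, advance the LSTM, emit logits.
        h, c = self.rnn(self.emb(tok), state)
        return self.out(h), (h, c)

def ngram_overlap_reward(hyp, ref, n=2):
    # Crude stand-in for BLEU: fraction of hypothesis n-grams found in the reference.
    hyp_ngrams = [tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1)]
    ref_ngrams = set(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    if not hyp_ngrams:
        return 0.0
    return sum(g in ref_ngrams for g in hyp_ngrams) / len(hyp_ngrams)

def sequence_level_step(model, optimizer, reference, batch_size=8):
    """One REINFORCE-style update: sample full sequences, score them with the
    test-time-style metric, and reinforce their log-probabilities."""
    tok = torch.full((batch_size,), BOS, dtype=torch.long)
    state = (torch.zeros(batch_size, HIDDEN), torch.zeros(batch_size, HIDDEN))
    log_probs, samples = [], [[] for _ in range(batch_size)]
    for _ in range(MAXLEN):
        logits, state = model.step(tok, state)
        dist = torch.distributions.Categorical(logits=logits)
        tok = dist.sample()
        log_probs.append(dist.log_prob(tok))
        for b in range(batch_size):
            samples[b].append(tok[b].item())
    rewards = torch.tensor([ngram_overlap_reward(s, reference) for s in samples])
    baseline = rewards.mean()                  # simple variance-reduction baseline
    advantage = rewards - baseline
    loss = -(torch.stack(log_probs, dim=1).sum(dim=1) * advantage).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return rewards.mean().item()

if __name__ == "__main__":
    model = Decoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    ref = [3, 4, 5, 6, 7, 8, EOS]              # toy reference sequence
    for step in range(5):
        print("mean reward:", sequence_level_step(model, opt, ref))
```

The paper's actual training anneals from cross-entropy toward the sequence-level objective and uses a learned reward baseline; both are omitted here for brevity.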

Tasks

Benchmark Results

Dataset                   | Model                        | Metric     | Claimed | Verified | Status
IWSLT 2015 German-English | Word-level LSTM w/ attention | BLEU score | 20.2    | —        | Unverified

Reproductions