SOTAVerified

Dynamic Evaluation of Neural Sequence Models

2017-09-21 · ICML 2018 · Code Available

Ben Krause, Emmanuel Kahembwe, Iain Murray, Steve Renals


Abstract

We present methodology for using dynamic evaluation to improve neural sequence models. Models are adapted to recent history via a gradient descent based mechanism, causing them to assign higher probabilities to re-occurring sequential patterns. Dynamic evaluation outperforms existing adaptation approaches in our comparisons. Dynamic evaluation improves the state-of-the-art word-level perplexities on the Penn Treebank and WikiText-2 datasets to 51.1 and 44.3 respectively, and the state-of-the-art character-level cross-entropies on the text8 and Hutter Prize datasets to 1.19 bits/char and 1.08 bits/char respectively.
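The core idea in the abstract — adapting model parameters to recent history with gradient descent while evaluating, so that re-occurring patterns get higher probability — can be illustrated on a toy model. The sketch below uses a simple unigram softmax model in NumPy, not the paper's RNN models or its specific update rule; it is only meant to show why online adaptation lowers the loss on repetitive sequences. All function names here are illustrative, not from the paper's code.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def static_eval(logits, tokens):
    """Average negative log-likelihood with parameters held fixed."""
    p = softmax(logits)
    return -np.mean(np.log(p[tokens]))

def dynamic_eval(logits, tokens, lr=0.5):
    """Dynamic evaluation (toy version): score each token with the
    current parameters, then take one gradient-descent step on that
    token's cross-entropy so the model adapts to recent history."""
    z = logits.copy()
    nll = 0.0
    for t in tokens:
        p = softmax(z)
        nll += -np.log(p[t])   # score BEFORE adapting, so this is a fair loss
        grad = p.copy()        # d(-log softmax(z)[t]) / dz = p - onehot(t)
        grad[t] -= 1.0
        z -= lr * grad         # adapt toward the recently seen token
    return nll / len(tokens)

# A stream with re-occurring patterns: the adapted model should assign
# it higher probability (lower loss) than the static one.
vocab = 8
rng = np.random.default_rng(0)
logits = rng.normal(size=vocab)
tokens = np.array([3] * 20 + [5] * 20)
print(dynamic_eval(logits, tokens) < static_eval(logits, tokens))  # True
```

The paper's actual method applies this adaptation to trained AWD-LSTM and mLSTM models over segments of the test sequence, with more sophisticated update rules; the mechanism of improvement, however, is the same as in this sketch.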

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| Hutter Prize | mLSTM + dynamic eval | Bits per Character (BPC) | 1.08 | | Unverified |
| Penn Treebank (Word Level) | AWD-LSTM + dynamic eval | Test perplexity | 51.1 | | Unverified |
| Text8 | mLSTM + dynamic eval | Bits per Character (BPC) | 1.19 | | Unverified |
| WikiText-2 | AWD-LSTM + dynamic eval | Test perplexity | 44.3 | | Unverified |

Reproductions