SOTAVerified

Improving Neural Language Models with a Continuous Cache

2016-12-13

Edouard Grave, Armand Joulin, Nicolas Usunier


Abstract

We propose an extension to neural network language models that adapts their predictions to the recent history. Our model is a simplified version of memory-augmented networks: it stores past hidden activations as memory and accesses them through a dot product with the current hidden activation. This mechanism is very efficient and scales to very large memory sizes. We also draw a link between the use of external memory in neural networks and the cache models used with count-based language models. We demonstrate on several language modeling datasets that our approach performs significantly better than recent memory-augmented networks.
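The mechanism described above can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: the cache distribution puts probability mass on each word that followed a stored hidden state, weighted by a softmax over dot products with the current hidden state, and is then linearly interpolated with the base model's distribution. The flatness parameter `theta` and interpolation weight `lam` are illustrative hyperparameters.

```python
import numpy as np

def cache_probs(history_h, history_words, h_t, vocab_size, theta=0.3):
    """Cache distribution over the vocabulary.

    history_h: list of past hidden states h_1..h_t
    history_words: the word that followed each stored hidden state
    h_t: current hidden state
    theta: controls the flatness of the softmax over dot products
    """
    scores = theta * np.array([h @ h_t for h in history_h])
    weights = np.exp(scores - scores.max())   # stable softmax
    weights /= weights.sum()
    p = np.zeros(vocab_size)
    for w, a in zip(history_words, weights):
        p[w] += a                             # mass on words seen in the cache
    return p

def interpolate(p_model, p_cache, lam=0.2):
    # Linear interpolation of the base model and cache distributions.
    return (1 - lam) * p_model + lam * p_cache
```

Because the cache only requires dot products against stored activations, extending it to thousands of past positions (as in the size-2,000 results below) is a matter of storing more vectors, with no extra training.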

Benchmark Results

Dataset      | Model                                                 | Metric          | Claimed | Verified | Status
WikiText-103 | Neural cache model (size = 2,000)                     | Test perplexity | 40.8    | —        | Unverified
WikiText-103 | Neural cache model (size = 100)                       | Test perplexity | 44.8    | —        | Unverified
WikiText-2   | Grave et al. (2016) - LSTM + continuous cache pointer | Test perplexity | 68.9    | —        | Unverified
WikiText-2   | Grave et al. (2016) - LSTM                            | Test perplexity | 99.3    | —        | Unverified
