Attentive Language Models

2017-11-01 · IJCNLP 2017

Giancarlo Salton, Robert Ross, John Kelleher

Abstract

In this paper, we extend Recurrent Neural Network Language Models (RNN-LMs) with an attention mechanism. We show that an "attentive" RNN-LM (with 11M parameters) achieves better perplexity than larger RNN-LMs (with 66M parameters) and performs comparably to an ensemble of 10 similarly sized RNN-LMs. We also show that an "attentive" RNN-LM needs less contextual information to achieve results similar to the state of the art on the WikiText-2 dataset.
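The core idea in the abstract, attending over the RNN's previous hidden states to build a context vector for the next-word prediction, can be sketched as follows. This is a minimal illustration, not the authors' exact architecture: the dot-product scoring function, vector sizes, and all variable names here are assumptions for demonstration.

```python
import math
import random

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attend(history, query):
    """Attention-weighted sum of past hidden states.

    Scores each past state against the current state (dot product is an
    illustrative choice; the paper may use a different scoring function)
    and returns the resulting context vector plus the attention weights.
    """
    weights = softmax([dot(h, query) for h in history])
    dim = len(query)
    context = [sum(w * h[i] for w, h in zip(weights, history))
               for i in range(dim)]
    return context, weights

random.seed(0)
dim, steps = 4, 5
# Stand-in hidden states, as if produced by an RNN over 5 tokens.
history = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(steps)]
query = history[-1]  # the current hidden state attends over the history
context, weights = attend(history, query)
print("attention weights:", [round(w, 3) for w in weights])
```

In an attentive RNN-LM along these lines, the context vector would be combined with the current hidden state before the output softmax, letting the model reach further back than the recurrent state alone carries.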