
Improved Language Modeling by Decoding the Past

2018-08-14 · ACL 2019

Siddhartha Brahma

Abstract

Highly regularized LSTMs achieve impressive results on several benchmark datasets in language modeling. We propose a new regularization method based on decoding the last token in the context using the predicted distribution of the next token. This biases the model towards retaining more contextual information, in turn improving its ability to predict the next token. With negligible overhead in the number of parameters and training time, our Past Decode Regularization (PDR) method achieves a word level perplexity of 55.6 on the Penn Treebank and 63.5 on the WikiText-2 datasets using a single softmax. We also show gains by using PDR in combination with a mixture-of-softmaxes, achieving a word level perplexity of 53.8 and 60.5 on these datasets. In addition, our method achieves 1.169 bits-per-character on the Penn Treebank Character dataset for character level language modeling. These results constitute a new state-of-the-art in their respective settings.
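The abstract describes PDR only at a high level. Below is a minimal, hypothetical PyTorch-style sketch of the idea as stated there: the predicted distribution over the next token is used to reconstruct ("decode") the last token of the context, and the resulting cross-entropy is added to the usual language-modeling loss as a regularizer. The function name pdr_loss, the past_decoder layer, the expected-embedding step, and the weighting factor are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def pdr_loss(next_token_logits, embedding, past_decoder, prev_tokens):
    """Sketch of a Past Decode Regularization term (assumed formulation).

    next_token_logits: (batch, seq, vocab) logits the LM produces for the next token
    embedding:         (vocab, emb_dim) token embedding matrix of the model
    past_decoder:      small linear layer emb_dim -> vocab (hypothetical component)
    prev_tokens:       (batch, seq) ids of the last token of each context
    """
    # Predicted distribution over the next token.
    next_probs = F.softmax(next_token_logits, dim=-1)
    # Expected embedding under that distribution (one plausible way to
    # carry the prediction back into embedding space).
    context_vec = next_probs @ embedding              # (batch, seq, emb_dim)
    # Try to decode the previous token from the predicted distribution.
    past_logits = past_decoder(context_vec)           # (batch, seq, vocab)
    return F.cross_entropy(
        past_logits.reshape(-1, past_logits.size(-1)),
        prev_tokens.reshape(-1),
    )

# Toy usage: total loss would be lm_loss + lam * pdr_loss(...), lam a tuned weight.
vocab, emb = 10, 8
past_decoder = torch.nn.Linear(emb, vocab)
logits = torch.randn(2, 5, vocab)
emb_matrix = torch.randn(vocab, emb)
prev = torch.randint(0, vocab, (2, 5))
reg = pdr_loss(logits, emb_matrix, past_decoder, prev)
```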

Benchmark Results

Dataset                         | Model                                        | Metric                  | Claimed | Verified | Status
Penn Treebank (Character Level) | Past Decode Reg. + AWD-LSTM-MoS + dyn. eval. | Bits per Character (BPC) | 1.17   | —        | Unverified
Penn Treebank (Word Level)      | Past Decode Reg. + AWD-LSTM-MoS + dyn. eval. | Test perplexity          | 47.3   | —        | Unverified
WikiText-2                      | Past Decode Reg. + AWD-LSTM-MoS + dyn. eval. | Test perplexity          | 40.3   | —        | Unverified
