Recurrent Neural Network Regularization

2014-09-08

Wojciech Zaremba, Ilya Sutskever, Oriol Vinyals

Abstract

We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, image caption generation, and machine translation.
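The recipe behind this result is, roughly, to apply dropout only to the non-recurrent connections of a stacked LSTM (the inputs to each layer and the output of the top layer), leaving the hidden-to-hidden recurrent transitions untouched. Below is a minimal sketch of that idea; the framework (PyTorch), layer sizes, and class name are illustrative assumptions, not the authors' original code.

```python
import torch
import torch.nn as nn


class RegularizedLSTM(nn.Module):
    """Stacked LSTM with dropout on non-recurrent connections only:
    between layers and on the output, never inside the recurrence."""

    def __init__(self, vocab_size, hidden_size=650, num_layers=2, dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.drop = nn.Dropout(dropout)
        # Separate LSTM modules per layer so dropout can be inserted
        # between layers without affecting hidden-to-hidden connections.
        self.layers = nn.ModuleList(
            [nn.LSTM(hidden_size, hidden_size, batch_first=True)
             for _ in range(num_layers)]
        )
        self.decoder = nn.Linear(hidden_size, vocab_size)

    def forward(self, tokens, states=None):
        x = self.drop(self.embed(tokens))  # dropout on input connections
        new_states = []
        for i, lstm in enumerate(self.layers):
            x, s = lstm(x, None if states is None else states[i])
            new_states.append(s)
            x = self.drop(x)  # dropout between layers / on the final output
        return self.decoder(x), new_states


if __name__ == "__main__":
    model = RegularizedLSTM(vocab_size=10000)
    logits, _ = model(torch.randint(0, 10000, (20, 35)))  # (batch, seq_len)
    print(logits.shape)  # torch.Size([20, 35, 10000])
```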

Tasks

Language modeling, speech recognition, image caption generation, machine translation

Benchmark Results

Dataset | Model | Metric | Claimed | Verified | Status
Penn Treebank (Word Level) | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | | Unverified
Penn Treebank (Word Level) | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | | Unverified

Reproductions