Improving Language Modeling using Densely Connected Recurrent Neural Networks
WS 2017 · 2017-07-19
Fréderic Godin, Joni Dambre, Wesley De Neve
Abstract
In this paper, we introduce the novel concept of densely connected layers into recurrent neural networks. We evaluate our proposed architecture on the Penn Treebank language modeling task. We show that we can obtain similar perplexity scores with six times fewer parameters compared to a standard stacked 2-layer LSTM model trained with dropout (Zaremba et al. 2014). In contrast with the current usage of skip connections, we show that densely connecting only a few stacked layers with skip connections already yields significant perplexity reductions.
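To make the densely connected recurrent architecture concrete, the following is a minimal PyTorch sketch in which each stacked LSTM layer receives the concatenation of the word embeddings and the outputs of all preceding layers, and the softmax classifier sees all of them as well. The class name, layer sizes, and dropout value are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a densely connected stacked LSTM language model.
# Hyperparameters and names are assumptions for illustration only.
import torch
import torch.nn as nn

class DenseStackedLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=200, hidden_dim=200,
                 num_layers=2, dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.drop = nn.Dropout(dropout)
        self.layers = nn.ModuleList()
        in_dim = emb_dim
        for _ in range(num_layers):
            # Each layer's input is the embeddings concatenated with the
            # outputs of every previous layer (the dense connections).
            self.layers.append(nn.LSTM(in_dim, hidden_dim, batch_first=True))
            in_dim += hidden_dim
        # The output layer also receives the full concatenation.
        self.decoder = nn.Linear(in_dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids
        features = [self.drop(self.embed(tokens))]
        for lstm in self.layers:
            out, _ = lstm(torch.cat(features, dim=-1))
            features.append(self.drop(out))
        return self.decoder(torch.cat(features, dim=-1))

# Usage with toy dimensions:
model = DenseStackedLSTM(vocab_size=10000)
logits = model(torch.randint(0, 10000, (8, 35)))  # (8, 35, 10000)
```

Because every layer reuses the features of the layers below it, the hidden layers can be kept small, which is what allows the parameter count to drop relative to a standard stacked LSTM of comparable perplexity.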