Fast-Slow Recurrent Neural Networks

2017-05-24 · NeurIPS 2017 · Code Available

Asier Mujika, Florian Meier, Angelika Steger

Abstract

Processing sequential data of variable length is a major challenge in a wide range of applications, such as speech recognition, language modeling, generative image modeling, and machine translation. Here, we address this challenge by proposing a novel recurrent neural network (RNN) architecture, the Fast-Slow RNN (FS-RNN). The FS-RNN incorporates the strengths of both multiscale RNNs and deep transition RNNs: it processes sequential data on different timescales and learns complex transition functions from one time step to the next. We evaluate the FS-RNN on two character-level language modeling datasets, Penn Treebank and Hutter Prize Wikipedia, where we improve the state-of-the-art results to 1.19 and 1.25 bits per character (BPC), respectively. In addition, an ensemble of two FS-RNNs achieves 1.20 BPC on Hutter Prize Wikipedia, outperforming the best known compression algorithm with respect to the BPC measure. We also present an empirical investigation of the learning and network dynamics of the FS-RNN, which explains the improved performance compared to other RNN architectures. Our approach is general, as any kind of RNN cell can serve as a building block for the FS-RNN architecture, so it can be applied flexibly to different tasks.
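
To make the architecture concrete: in the FS-RNN, k "fast" cells form a deep transition within each time step, while a single "slow" cell updates once per step, sandwiched between the first and second fast updates. Below is a minimal PyTorch sketch of one such step, assuming LSTM cells for both layers; the class and argument names (FSRNNCell, fast_size, slow_size, k) are ours for illustration, not the authors' reference implementation.

```python
import torch
import torch.nn as nn


class FSRNNCell(nn.Module):
    """Sketch of one FS-RNN time step: k fast LSTM cells form a deep
    transition, and one slow LSTM cell updates once per step."""

    def __init__(self, input_size, fast_size, slow_size, k=4):
        super().__init__()
        assert k >= 2, "the FS-RNN needs at least two fast cells"
        self.f1 = nn.LSTMCell(input_size, fast_size)   # F1 sees the input x_t
        self.slow = nn.LSTMCell(fast_size, slow_size)  # S sees F1's output
        self.f2 = nn.LSTMCell(slow_size, fast_size)    # F2 sees S's output
        # F3..Fk receive no external input, only the evolving fast state,
        # so a zero placeholder input drives them here.
        self.rest = nn.ModuleList(
            nn.LSTMCell(1, fast_size) for _ in range(k - 2)
        )

    def forward(self, x, fast_state, slow_state):
        h_f, c_f = self.f1(x, fast_state)        # fast update 1
        h_s, c_s = self.slow(h_f, slow_state)    # single slow update
        h_f, c_f = self.f2(h_s, (h_f, c_f))      # fast update 2
        dummy = x.new_zeros(x.size(0), 1)
        for cell in self.rest:                   # fast updates 3..k
            h_f, c_f = cell(dummy, (h_f, c_f))
        # Predictions are made from the last fast cell's hidden state.
        return h_f, (h_f, c_f), (h_s, c_s)


# Illustrative usage (hidden sizes here are placeholders, not the
# paper's exact hyperparameters):
cell = FSRNNCell(input_size=50, fast_size=700, slow_size=400, k=4)
x = torch.randn(8, 50)
fast = (torch.zeros(8, 700), torch.zeros(8, 700))
slow = (torch.zeros(8, 400), torch.zeros(8, 400))
out, fast, slow = cell(x, fast, slow)
```

Unrolled over a character sequence, the last fast cell's hidden state feeds the output layer at every step, while the slow cell, updating only once per step, retains information over longer horizons.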

Tasks

Language Modeling (character level)

Benchmark Results

Dataset | Model | Metric | Claimed | Verified | Status
enwik8 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | – | Unverified
Hutter Prize | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | – | Unverified
Hutter Prize | FS-LSTM-4 | Bits per Character (BPC) | 1.28 | – | Unverified
Penn Treebank (Character Level) | FS-LSTM-4 | Bits per Character (BPC) | 1.19 | – | Unverified
Penn Treebank (Character Level) | FS-LSTM-2 | Bits per Character (BPC) | 1.19 | – | Unverified
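
For reference, BPC is the model's average negative log2-probability of the next character, i.e. the character-level cross-entropy converted from nats to bits. A small sketch of the conversion (the helper name bits_per_character is ours):

```python
import math

import torch
import torch.nn.functional as F


def bits_per_character(logits, targets):
    """Mean negative log2-likelihood of the target characters.

    logits:  (num_chars, vocab_size) unnormalized model scores
    targets: (num_chars,) ground-truth character ids
    """
    nll_nats = F.cross_entropy(logits, targets)  # mean NLL in nats
    return nll_nats.item() / math.log(2)         # nats -> bits
```

A model achieving 1.25 BPC on enwik8 would, in principle, compress the text to about 1.25 bits per byte, which is why the abstract can compare the ensemble's 1.20 BPC directly against compression algorithms.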

Reproductions

No reproductions yet. Be the first to reproduce this paper.