SOTAVerified

An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling

2018-03-04

Shaojie Bai, J. Zico Kolter, Vladlen Koltun

Code available. Be the first to reproduce this paper.

Abstract

For most deep learning practitioners, sequence modeling is synonymous with recurrent networks. Yet recent results indicate that convolutional architectures can outperform recurrent networks on tasks such as audio synthesis and machine translation. Given a new sequence modeling task or dataset, which architecture should one use? We conduct a systematic evaluation of generic convolutional and recurrent architectures for sequence modeling. The models are evaluated across a broad range of standard tasks that are commonly used to benchmark recurrent networks. Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory. We conclude that the common association between sequence modeling and recurrent networks should be reconsidered, and convolutional networks should be regarded as a natural starting point for sequence modeling tasks. To assist related work, we have made code available at http://github.com/locuslab/TCN.
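
The "simple convolutional architecture" evaluated here is the temporal convolutional network (TCN): a stack of causal 1D convolutions whose dilation grows exponentially with depth, wrapped in residual blocks. The sketch below is a minimal PyTorch rendering of that design, not the authors' implementation; the reference code in the repository above additionally uses weight normalization and dropout, and the class and parameter names here are illustrative.

```python
import torch
import torch.nn as nn


class CausalConv1d(nn.Module):
    """1D convolution padded on the left only, so the output at time t
    depends solely on inputs at times <= t."""

    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):  # x: (batch, channels, time)
        return self.conv(nn.functional.pad(x, (self.pad, 0)))


class TemporalBlock(nn.Module):
    """Residual block of two causal dilated convolutions (the weight
    normalization and dropout used in the paper are omitted for brevity)."""

    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.net = nn.Sequential(
            CausalConv1d(in_ch, out_ch, kernel_size, dilation), nn.ReLU(),
            CausalConv1d(out_ch, out_ch, kernel_size, dilation), nn.ReLU(),
        )
        # A 1x1 convolution matches channel counts for the residual addition.
        self.downsample = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        return torch.relu(self.net(x) + self.downsample(x))


class TCN(nn.Module):
    """Stack of temporal blocks with dilation 2**i at level i, so the
    receptive field grows exponentially with the number of levels."""

    def __init__(self, in_ch, channels, kernel_size=3):
        super().__init__()
        layers = []
        for i, out_ch in enumerate(channels):
            layers.append(TemporalBlock(in_ch, out_ch, kernel_size, dilation=2 ** i))
            in_ch = out_ch
        self.network = nn.Sequential(*layers)

    def forward(self, x):  # x: (batch, channels, time)
        return self.network(x)


# Example: a 4-level TCN over a batch of 8 univariate sequences of length 100.
model = TCN(in_ch=1, channels=[32, 32, 32, 32])
out = model(torch.randn(8, 1, 100))
print(out.shape)  # torch.Size([8, 32, 100])
```

Because each convolution is left-padded by (kernel_size - 1) * dilation, outputs never see future inputs and the sequence length is preserved, which is what lets the same architecture be dropped into the autoregressive benchmarks listed below.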

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| Penn Treebank (character-level) | Temporal Convolutional Network (TCN) | Bits per character (BPC) | 1.31 | | Unverified |
| Penn Treebank (word-level) | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | | Unverified |
| Penn Treebank (word-level) | GRU (Bai et al., 2018) | Test perplexity | 92.48 | | Unverified |
| WikiText-103 | TCN | Test perplexity | 45.19 | | Unverified |
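
Both claimed metrics are monotone transforms of the model's average negative log-likelihood (NLL): test perplexity is the exponential of the per-word NLL in nats, and bits per character is the per-character NLL rescaled from nats to bits. A small sketch of the conversions, in the same spirit as above; the sample inputs are illustrative values, not numbers taken from the paper:

```python
import math


def perplexity(nll_nats_per_word: float) -> float:
    """Word-level perplexity from the average NLL in nats per word."""
    return math.exp(nll_nats_per_word)


def bits_per_character(nll_nats_per_char: float) -> float:
    """Bits per character (BPC): the average per-character NLL in base 2."""
    return nll_nats_per_char / math.log(2)


# Illustrative inputs chosen to land near the claimed values above:
print(round(perplexity(3.81), 2))           # 45.15, cf. the WikiText-103 row
print(round(bits_per_character(0.908), 2))  # 1.31, cf. the character-level PTB row
```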

Reproductions

None yet.