
Simple Recurrent Units for Highly Parallelizable Recurrence

2017-09-08 · EMNLP 2018 · Code Available

Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi

Abstract

Common recurrent neural architectures scale poorly due to the intrinsic difficulty in parallelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU), a light recurrent unit that balances model capacity and scalability. SRU is designed to provide expressive recurrence, enable highly parallelized implementation, and comes with careful initialization to facilitate training of deep models. We demonstrate the effectiveness of SRU on multiple NLP tasks. SRU achieves 5–9x speed-up over cuDNN-optimized LSTM on classification and question answering datasets, and delivers stronger results than LSTM and convolutional models. We also obtain an average of 0.7 BLEU improvement over the Transformer model on translation by incorporating SRU into the architecture.
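
The abstract does not spell out the recurrence itself, so the following minimal NumPy sketch illustrates the idea behind SRU's parallelism: all matrix multiplications depend only on the input sequence and can be batched over time, leaving only a cheap element-wise loop as the sequential part. This is an illustrative sketch, not the paper's implementation; it assumes equal input and hidden dimensions, and the function name `sru_layer` and parameter names (`W`, `Wf`, `Wr`, `vf`, `vr`, `bf`, `br`) are chosen here for readability.

```python
import numpy as np

def sru_layer(x, W, Wf, Wr, vf, vr, bf, br):
    """Illustrative single-layer SRU forward pass (sketch, not the official code).

    x: (seq_len, d) input sequence; W, Wf, Wr: (d, d); vf, vr, bf, br: (d,).
    Assumes input and hidden dimensions are equal so the highway connection
    h_t = r_t * c_t + (1 - r_t) * x_t is well defined.
    """
    seq_len, d = x.shape
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    # Time-independent projections: computed for all steps at once,
    # which is where the parallel speed-up comes from.
    u  = x @ W    # candidate state values
    uf = x @ Wf   # forget-gate pre-activations
    ur = x @ Wr   # reset-gate pre-activations

    c = np.zeros(d)          # internal state c_0
    h = np.empty_like(x)     # outputs h_1 .. h_T
    for t in range(seq_len):
        # Only element-wise operations remain in the sequential loop.
        f = sigmoid(uf[t] + vf * c + bf)   # forget gate, peeks at c_{t-1}
        r = sigmoid(ur[t] + vr * c + br)   # reset gate, peeks at c_{t-1}
        c = f * c + (1.0 - f) * u[t]       # state update
        h[t] = r * c + (1.0 - r) * x[t]    # highway connection to the input
    return h, c
```

The design point the sketch tries to convey is that the expensive work (the three matrix multiplications) has no time-step dependency, so it can be fused and parallelized; the remaining per-step update is element-wise and is what the paper's CUDA kernel executes sequentially.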

Tasks

Text Classification · Question Answering · Machine Translation

Benchmark Results

Dataset                | Model             | Metric     | Claimed | Verified | Status
WMT2014 English-German | Transformer + SRU | BLEU score | 28.4    | n/a      | Unverified

Reproductions

No reproductions have been submitted yet.