
Sliced Recurrent Neural Networks

2018-07-06 · COLING 2018 · Code Available

Zeping Yu, Gongshen Liu


Abstract

Recurrent neural networks have achieved great success in many NLP tasks. However, they are difficult to parallelize because of their recurrent structure, so training RNNs takes a long time. In this paper, we introduce sliced recurrent neural networks (SRNNs), which can be parallelized by slicing the sequences into many subsequences. SRNNs have the ability to obtain high-level information through multiple layers with few extra parameters. We prove that the standard RNN is a special case of the SRNN when we use linear activation functions. Without changing the recurrent units, SRNNs are 136 times as fast as standard RNNs and could be even faster when we train longer sequences. Experiments on six large-scale sentiment analysis datasets show that SRNNs achieve better performance than standard RNNs.
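To make the idea concrete, below is a minimal sketch of the slicing scheme the abstract describes: a long sequence is cut into equal-length subsequences, each subsequence is encoded by an RNN independently (the step that can be parallelized), and the per-slice final states form a much shorter sequence for the next layer. This is not the authors' implementation; the toy tanh RNN cell, dimensions, and function names (`rnn_last_state`, `sliced_rnn`) are illustrative assumptions.

```python
# Sketch of a two-level sliced RNN, assuming a plain tanh RNN cell.
import numpy as np

def rnn_last_state(x, W_in, W_rec):
    """Run a simple tanh RNN over x (steps, dim) and return the final hidden state."""
    h = np.zeros(W_rec.shape[0])
    for x_t in x:
        h = np.tanh(x_t @ W_in + h @ W_rec)
    return h

def sliced_rnn(x, n_slices, W_in, W_rec, W_in2, W_rec2):
    """Slice the sequence, encode each slice independently, then encode the slice states."""
    steps, _ = x.shape
    slice_len = steps // n_slices
    # Each slice is independent of the others, so this loop could run in parallel.
    slice_states = np.stack([
        rnn_last_state(x[i * slice_len:(i + 1) * slice_len], W_in, W_rec)
        for i in range(n_slices)
    ])
    # Top-level RNN over the much shorter sequence of slice states.
    return rnn_last_state(slice_states, W_in2, W_rec2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hidden = 8
    x = rng.standard_normal((64, 16))  # 64 time steps, 16-dim embeddings (toy sizes)
    W_in = rng.standard_normal((16, hidden)) * 0.1
    W_rec = rng.standard_normal((hidden, hidden)) * 0.1
    W_in2 = rng.standard_normal((hidden, hidden)) * 0.1
    W_rec2 = rng.standard_normal((hidden, hidden)) * 0.1
    print(sliced_rnn(x, n_slices=8, W_in=W_in, W_rec=W_rec, W_in2=W_in2, W_rec2=W_rec2).shape)
```

In this sketch each slice-level RNN sees only 8 of the 64 steps, so the longest sequential dependency chain is 8 + 8 steps instead of 64, which is the source of the speedup the paper reports when the slice encoders run in parallel.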

Tasks

Benchmark Results

Dataset | Model | Metric | Claimed | Verified | Status
Amazon Review Full | SRNN | Accuracy | 61.65 | - | Unverified
Amazon Review Polarity | SRNN | Accuracy | 95.26 | - | Unverified
Yelp Binary classification | SRNN | Error | 3.96 | - | Unverified

Reproductions