When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute
Tao Lei
Code: github.com/asappresearch/sru (official PyTorch implementation)
Abstract
Large language models have become increasingly difficult to train because of growing computation time and cost. In this work, we present SRU++, a highly efficient architecture that combines fast recurrence and attention for sequence modeling. SRU++ exhibits strong modeling capacity and training efficiency. On standard language modeling benchmarks such as enwik8, WikiText-103 and One Billion Word, our model obtains better bits-per-character and perplexity while using 3x-10x less training cost than top-performing Transformer models. For instance, our model achieves a state-of-the-art result on enwik8 with 1.6 days of training on an 8-GPU machine. We further demonstrate that SRU++ requires minimal attention to reach near state-of-the-art performance. Our results suggest that jointly leveraging fast recurrence with little attention is a promising direction for accelerating model training and inference.
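The core of SRU++ is the elementwise SRU recurrence, whose per-step cost is independent of the hidden-state interactions that make classic RNNs slow; SRU++ additionally computes the input transformation with an attention sub-layer. Below is a minimal, illustrative sketch of the scalar SRU recurrence only (not the authors' CUDA implementation; all parameter names `w`, `wf`, `bf`, `wr`, `br` are hypothetical):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sru_cell(xs, w, wf, bf, wr, br):
    """Scalar SRU recurrence over an input sequence xs.

    f_t = sigmoid(wf * x_t + bf)                 # forget gate
    c_t = f_t * c_{t-1} + (1 - f_t) * (w * x_t)  # internal state
    r_t = sigmoid(wr * x_t + br)                 # highway gate
    h_t = r_t * c_t + (1 - r_t) * x_t            # output with skip connection
    """
    c, hs = 0.0, []
    for x in xs:
        f = sigmoid(wf * x + bf)
        c = f * c + (1.0 - f) * (w * x)
        r = sigmoid(wr * x + br)
        hs.append(r * c + (1.0 - r) * x)
    return hs
```

In SRU++, the term `w * x_t` would be replaced by the output of an attention sub-layer applied to the input sequence, so only the cheap elementwise recurrence remains sequential.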
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| enwik8 | SRU++ Large | Bits per Character (BPC) | 0.95 | — | Unverified |
| enwik8 | SRU++ Base | Bits per Character (BPC) | 0.97 | — | Unverified |
| One Billion Word | SRU++ Large | PPL | 23.5 | — | Unverified |
| One Billion Word | SRU++ | PPL | 25.1 | — | Unverified |
| WikiText-103 | SRU++ Large | Test perplexity | 17.1 | — | Unverified |
| WikiText-103 | SRU++ Base | Test perplexity | 18.3 | — | Unverified |
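Both metrics in the table are transforms of the average cross-entropy loss: bits-per-character (for character-level enwik8) is the loss expressed in bits, while perplexity (for word-level WikiText-103 and One Billion Word) is the exponentiated loss in nats. A small illustrative conversion, not taken from the paper:

```python
import math

def bpc_from_nats(loss_nats):
    # bits-per-character = average cross-entropy in nats / ln(2)
    return loss_nats / math.log(2)

def ppl_from_nats(loss_nats):
    # perplexity = exp(average cross-entropy in nats)
    return math.exp(loss_nats)
```

For example, a character-level loss of ln(2) nats corresponds to exactly 1.0 BPC, and a loss of 0 nats corresponds to a perplexity of 1.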