Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov
Code
- github.com/kimiyoung/transformer-xl (official, in paper; PyTorch, ★ 0)
- github.com/sooftware/conformer (PyTorch, ★ 1,108)
- github.com/mustafaaljadery/gemma-2b-10m (PyTorch, ★ 937)
- github.com/google-research/meliad (JAX, ★ 259)
- github.com/shanghai-digital-brain-laboratory/bdm-db1 (PyTorch, ★ 134)
- github.com/aiha-lab/Attention-Head-Pruning (PyTorch, ★ 22)
- github.com/park-cheol/ASR-Conformer (PyTorch, ★ 15)
- github.com/Jmkernes/PAR-Transformer-XL (TensorFlow, ★ 7)
- github.com/zhdbwe/Paper-DailyReading (TensorFlow, ★ 5)
- github.com/AIResearchHub/transformergallery (PyTorch, ★ 4)
Abstract
Transformers have the potential to learn longer-term dependencies, but are limited by a fixed-length context in the setting of language modeling. We propose a novel neural architecture, Transformer-XL, that enables learning dependencies beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only enables capturing longer-term dependencies, but also resolves the context-fragmentation problem. As a result, Transformer-XL learns dependencies that are 80% longer than RNNs and 450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results of bpc/perplexity to 0.99 on enwik8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably coherent, novel text articles with thousands of tokens. Our code, pretrained models, and hyperparameters are available in both TensorFlow and PyTorch.
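The segment-level recurrence the abstract describes can be sketched in a few lines: hidden states computed for one segment are cached as a fixed-length memory (with gradients stopped in the real model) and prepended to the next segment's states, so attention reaches beyond the current segment. The following NumPy sketch is illustrative only; the function name `update_memory` and the toy dimensions are assumptions, not the paper's implementation.

```python
import numpy as np

def update_memory(mem, hidden, mem_len):
    """Append the new segment's hidden states to the cached memory,
    keeping only the most recent `mem_len` positions.
    (In Transformer-XL the cached states are treated as constants,
    i.e. gradients are stopped on `mem`.)"""
    cat = np.concatenate([mem, hidden], axis=0)
    return cat[-mem_len:]

# Toy sizes (hypothetical): segment length 4, hidden size 3, memory length 8.
seg_len, d, mem_len = 4, 3, 8
mem = np.zeros((0, d))  # empty memory before the first segment

for step in range(3):
    hidden = np.random.randn(seg_len, d)  # stand-in for a layer's output
    # Attention for this segment would read from [memory; current segment],
    # so the effective context grows beyond one segment.
    context = np.concatenate([mem, hidden], axis=0)
    mem = update_memory(mem, hidden, mem_len)

print(mem.shape)  # memory is capped at mem_len positions
```

Note that the memory grows across segments until it hits `mem_len`, after which the oldest positions are dropped; stacking this over many layers is what lets the model attend much further back than a single fixed-length segment.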
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| enwik8 | Transformer-XL (12 layers) | Bits per Character (BPC) | 1.06 | — | Unverified |
| enwik8 | Transformer-XL (18 layers) | Bits per Character (BPC) | 1.03 | — | Unverified |
| enwik8 | Transformer-XL (24 layers) | Bits per Character (BPC) | 0.99 | — | Unverified |
| Hutter Prize | Transformer-XL (12 layers) | Bits per Character (BPC) | 1.06 | — | Unverified |
| Hutter Prize | Transformer-XL (18 layers) | Bits per Character (BPC) | 1.03 | — | Unverified |
| Hutter Prize | Transformer-XL (24 layers) | Bits per Character (BPC) | 0.99 | — | Unverified |
| One Billion Word | Transformer-XL Base | Perplexity (PPL) | 23.5 | — | Unverified |
| One Billion Word | Transformer-XL Large | Perplexity (PPL) | 21.8 | — | Unverified |
| Penn Treebank (word level) | Transformer-XL | Test perplexity | 54.55 | — | Unverified |
| text8 | Transformer-XL (24 layers) | Bits per Character (BPC) | 1.08 | — | Unverified |
| WikiText-103 | Transformer-XL Standard | Test perplexity | 24 | — | Unverified |
| WikiText-103 | Transformer-XL Large | Test perplexity | 18.3 | — | Unverified |