SOTAVerified

Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context

2019-01-09 · ACL 2019 · Code Available

Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov

Abstract

Transformers have the potential to learn longer-term dependencies, but are limited by a fixed-length context in the setting of language modeling. We propose a novel neural architecture, Transformer-XL, that enables learning dependencies beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only enables capturing longer-term dependencies, but also resolves the context fragmentation problem. As a result, Transformer-XL learns dependencies that are 80% longer than RNNs and 450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results in bpc/perplexity to 0.99 on enwik8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably coherent, novel text articles with thousands of tokens. Our code, pretrained models, and hyperparameters are available in both TensorFlow and PyTorch.
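
The sketch below illustrates the segment-level recurrence idea from the abstract, assuming a simplified single attention layer. The real model stacks many layers, keeps a bounded memory per layer, and swaps absolute positions for relative positional encodings (absolute indices would collide across segments); the names here (`RecurrentSegmentAttention`, `d_model`, `mem`) are illustrative, not the paper's API.

```python
import torch
import torch.nn as nn

class RecurrentSegmentAttention(nn.Module):
    """Toy segment-level recurrence: cache the previous segment's hidden
    states and let the current segment attend over them."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x, mem=None):
        # Prepend the cached states of the previous segment so queries in the
        # current segment can attend across the segment boundary.
        context = x if mem is None else torch.cat([mem, x], dim=1)
        q_len, k_len = x.size(1), context.size(1)
        # Causal mask: position i may attend to every memory slot and to
        # current-segment positions <= i (True = blocked).
        mask = torch.triu(
            torch.ones(q_len, k_len, dtype=torch.bool),
            diagonal=k_len - q_len + 1,
        )
        out, _ = self.attn(x, context, context, attn_mask=mask, need_weights=False)
        # Cache this segment's states for the next segment; detach so no
        # gradient flows into past segments (the paper's stop-gradient memory).
        return out, x.detach()

# Process a long sequence segment by segment, carrying the memory forward.
layer = RecurrentSegmentAttention(d_model=64, n_heads=4)
mem = None
for segment in torch.randn(4, 1, 8, 64).unbind(0):  # 4 segments, batch 1, length 8
    out, mem = layer(segment, mem)
```

At evaluation time the same cached states are reused instead of recomputing the full context for every new position, which is where the abstract's reported speedup over the vanilla Transformer comes from.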

Tasks

Language Modeling

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| enwik8 | Transformer-XL (24 layers) | Bit per Character (BPC) | 0.99 | | Unverified |
| enwik8 | Transformer-XL (12 layers) | Bit per Character (BPC) | 1.06 | | Unverified |
| enwik8 | Transformer-XL (18 layers) | Bit per Character (BPC) | 1.03 | | Unverified |
| Hutter Prize | Transformer-XL (12 layers) | Bit per Character (BPC) | 1.06 | | Unverified |
| Hutter Prize | Transformer-XL (24 layers) | Bit per Character (BPC) | 0.99 | | Unverified |
| Hutter Prize | Transformer-XL (18 layers) | Bit per Character (BPC) | 1.03 | | Unverified |
| One Billion Word | Transformer-XL Large | PPL | 21.8 | | Unverified |
| One Billion Word | Transformer-XL Base | PPL | 23.5 | | Unverified |
| Penn Treebank (Word Level) | Transformer-XL | Test perplexity | 54.55 | | Unverified |
| Text8 | Transformer-XL (24 layers) | Bit per Character (BPC) | 1.08 | | Unverified |
| WikiText-103 | Transformer-XL Standard | Test perplexity | 24 | | Unverified |
| WikiText-103 | Transformer-XL Large | Test perplexity | 18.3 | | Unverified |
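
For context on the two metrics above: bits per character is the average character-level cross-entropy loss rescaled from nats to bits, and perplexity is the exponentiated average per-token loss. A small illustration (the loss values are hypothetical inputs chosen to land near the claimed numbers, not reproduced results):

```python
import math

def bits_per_character(nats_per_char: float) -> float:
    # BPC is cross-entropy measured in bits rather than nats.
    return nats_per_char / math.log(2)

def perplexity(nats_per_token: float) -> float:
    # Perplexity is the exponentiated average per-token cross-entropy.
    return math.exp(nats_per_token)

# A char-level loss of ~0.686 nats gives ~0.99 BPC (the enwik8 claim);
# a word-level loss of ~2.907 nats gives ~18.3 PPL (the WikiText-103 claim).
print(round(bits_per_character(0.686), 2))  # 0.99
print(round(perplexity(2.907), 1))          # 18.3
```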

Reproductions

None yet. Be the first to reproduce this paper.