Dynamic Evaluation of Transformer Language Models
2019-04-17
Ben Krause, Emmanuel Kahembwe, Iain Murray, Steve Renals
Code: github.com/benkrause/dynamiceval-transformer
Abstract
This research note combines two methods that have recently improved the state of the art in language modeling: Transformers and dynamic evaluation. Transformers use stacked layers of self-attention that allow them to capture long-range dependencies in sequential data. Dynamic evaluation fits models to the recent sequence history, allowing them to assign higher probabilities to recurring sequential patterns. By applying dynamic evaluation to Transformer-XL models, we improve the state of the art on enwik8 from 0.99 to 0.94 bits/char, on text8 from 1.08 to 1.04 bits/char, and on WikiText-103 from 18.3 to 16.4 perplexity.
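The core idea can be sketched in a few lines. The toy below is not the paper's implementation: it uses a hypothetical bigram softmax model over a tiny vocabulary, and a simplified version of the RMS-normalised update with decay that the paper describes, where the model is scored on each segment first and then adapted with one gradient step on that segment's loss (all names, hyperparameters, and the `decay` pull-back toward the static weights are illustrative assumptions):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def segment_loss_grad(W, tokens):
    """Mean negative log-likelihood of a bigram softmax model on one
    segment, plus the gradient with respect to the logit table W."""
    loss, grad, n = 0.0, np.zeros_like(W), 0
    for a, b in zip(tokens[:-1], tokens[1:]):
        p = softmax(W[a])
        loss += -np.log(p[b])
        grad[a] += p          # d(-log p[b]) / dW[a] = p - one_hot(b)
        grad[a, b] -= 1.0
        n += 1
    return loss / n, grad / n

def dynamic_eval(W0, tokens, seg_len=15, lr=0.05, beta=0.9,
                 eps=1e-8, decay=0.01):
    """Score the sequence segment by segment; after scoring each
    segment, take one RMS-normalised gradient step on its loss, with
    a decay term pulling the weights back toward the static model W0
    (a loose sketch of 'RMS dynamic eval + decay')."""
    W, ms, losses = W0.copy(), np.zeros_like(W0), []
    for i in range(0, len(tokens) - 1, seg_len):
        seg = tokens[i:i + seg_len + 1]
        if len(seg) < 2:
            break
        loss, g = segment_loss_grad(W, seg)   # evaluate first ...
        losses.append(loss)
        ms = beta * ms + (1 - beta) * g * g   # ... then adapt
        W = W - lr * g / (np.sqrt(ms) + eps) + decay * (W0 - W)
    return float(np.mean(losses))

# A repetitive sequence: adapting to recent history should beat the
# static model, which stays at the uniform loss of ln(3) per token.
tokens = [0, 1, 2] * 60
W0 = np.zeros((3, 3))                         # uniform bigram model
static_loss, _ = segment_loss_grad(W0, tokens)
dyn_loss = dynamic_eval(W0, tokens)
```

On this recurring pattern the dynamically evaluated model's mean loss drops below the static model's, which is the effect the paper exploits at scale with Transformer-XL.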
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| enwik8 | Transformer-XL (24 layers) + RMS dynamic eval + decay | Bits per character (BPC) | 0.94 | — | Unverified |
| Hutter Prize | Transformer-XL + RMS dynamic eval | Bits per character (BPC) | 0.94 | — | Unverified |
| text8 | Transformer-XL + RMS dynamic eval + decay | Bits per character (BPC) | 1.04 | — | Unverified |
| WikiText-103 | Transformer-XL + RMS dynamic eval | Test perplexity | 16.4 | — | Unverified |
| WikiText-103 | Transformer-XL + SGD dynamic eval | Test perplexity | 17 | — | Unverified |