SOTAVerified

Character-Level Language Modeling with Deeper Self-Attention

2018-08-09 · Code Available

Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, Llion Jones

Abstract

LSTMs and other RNN variants have shown strong performance on character-level language modeling. These models are typically trained using truncated backpropagation through time, and it is common to assume that their success stems from their ability to remember long-term contexts. In this paper, we show that a deep (64-layer) transformer model with fixed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important to add auxiliary losses, both at intermediate network layers and intermediate sequence positions.
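The auxiliary losses the abstract highlights can be sketched in plain Python: a per-position loss at every intermediate layer is down-weighted and added to the final layer's loss. The equal per-position averaging and the fixed 0.5 intermediate weight below are illustrative assumptions; the paper itself schedules these weights during training.

```python
def combined_loss(per_layer_position_losses, aux_weight=0.5):
    """Combine losses across layers and sequence positions.

    per_layer_position_losses: list (one entry per layer, final layer last)
    of lists of per-character cross-entropy values at each sequence position.
    The 0.5 weight for intermediate layers is a stand-in for the paper's
    decaying schedule, not the exact value used.
    """
    n_layers = len(per_layer_position_losses)
    total = 0.0
    for layer, position_losses in enumerate(per_layer_position_losses):
        # Predict (and incur loss) at every sequence position,
        # not only at the final position of the context window.
        layer_loss = sum(position_losses) / len(position_losses)
        if layer == n_layers - 1:
            total += layer_loss               # final layer: full weight
        else:
            total += aux_weight * layer_loss  # intermediate layer: down-weighted
    return total
```

With one intermediate and one final layer, `combined_loss([[2.0, 2.0], [1.0, 1.0]])` contributes 0.5 × 2.0 from the intermediate layer plus 1.0 from the final layer.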

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| enwik8 | Transformer (64 layers) | Bits per Character (BPC) | 1.06 | - | Unverified |
| enwik8 | 64-layer Character Transformer Model | Bits per Character (BPC) | 1.11 | - | Unverified |
| Hutter Prize | 64-layer Character Transformer Model | Bits per Character (BPC) | 1.06 | - | Unverified |
| Hutter Prize | 12-layer Character Transformer Model | Bits per Character (BPC) | 1.11 | - | Unverified |
| Text8 | 64-layer Character Transformer Model | Bits per Character (BPC) | 1.13 | - | Unverified |
| Text8 | 12-layer Character Transformer Model | Bits per Character (BPC) | 1.18 | - | Unverified |

Reproductions