SOTAVerified

Adaptive Attention Span in Transformers

2019-05-19 · ACL 2019 · Code Available

Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, Armand Joulin

Code Available — Be the first to reproduce this paper.


Abstract

We propose a novel self-attention mechanism that can learn its optimal attention span. This allows us to significantly extend the maximum context size used in Transformers while maintaining control over their memory footprint and computational time. We show the effectiveness of our approach on the task of character-level language modeling, where we achieve state-of-the-art performance on text8 and enwik8 by using a maximum context of 8k characters.
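The core idea is that each attention head multiplies its attention weights by a soft mask that stays at 1 within a learned span z, ramps down linearly over a width R, and is 0 beyond it, so z stays differentiable and can be trained jointly with the model. Below is a minimal PyTorch sketch of that masking step, assuming the mask m_z(x) = clamp((R + z - x) / R, 0, 1) over the distance x to each past position; the module and parameter names (AdaptiveSpanMask, ramp, span_frac) are illustrative and not taken from the authors' released code.

```python
import torch
import torch.nn as nn


class AdaptiveSpanMask(nn.Module):
    """Soft masking over attention weights with a learnable span z.

    Minimal sketch of the idea from the paper, not the authors' implementation:
    positions farther than z + R are fully masked, and the linear ramp of width R
    keeps the mask differentiable so z can be learned per head.
    """

    def __init__(self, max_span: int, ramp: int = 32, init_frac: float = 0.0):
        super().__init__()
        self.max_span = max_span
        self.ramp = ramp
        # Span is stored as a fraction of max_span and clamped to [0, 1].
        self.span_frac = nn.Parameter(torch.tensor(float(init_frac)))

    def forward(self, attn: torch.Tensor) -> torch.Tensor:
        # attn: (..., span) attention weights over the last `span` positions,
        # ordered from the farthest position (distance span) to the nearest (distance 1).
        span = attn.size(-1)
        z = self.span_frac.clamp(0, 1) * self.max_span
        distance = torch.arange(span, 0, -1, device=attn.device, dtype=attn.dtype)
        # m_z(x) = clamp((R + z - x) / R, 0, 1): 1 inside the span, linear ramp, then 0.
        mask = ((self.ramp + z - distance) / self.ramp).clamp(0, 1)
        masked = attn * mask
        # Renormalize so the surviving weights still sum to 1.
        return masked / masked.sum(dim=-1, keepdim=True).clamp(min=1e-8)
```

In the paper, an L1 penalty on the learned spans encourages each head to keep its span as short as the task allows, which is what lets the model reach an 8k-character maximum context without paying for it on every head.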

Tasks

Benchmark Results

Dataset | Model                                     | Metric                  | Claimed | Verified | Status
enwik8  | Transformer (24 layers, 8k adaptive span) | Bit per Character (BPC) | 0.98    | —        | Unverified
enwik8  | Transformer (12 layers, 8k adaptive span) | Bit per Character (BPC) | 1.02    | —        | Unverified
text8   | Transformer (24 layers, 8k adaptive span) | Bit per Character (BPC) | 1.07    | —        | Unverified
text8   | Transformer (12 layers, 8k adaptive span) | Bit per Character (BPC) | 1.11    | —        | Unverified

Reproductions