
Adaptively Sparse Transformers

2019-08-30 · IJCNLP 2019 · Code Available

Gonçalo M. Correia, Vlad Niculae, André F. T. Martins


Abstract

Attention mechanisms have become ubiquitous in NLP. Recent architectures, notably the Transformer, learn powerful context-aware word representations through layered, multi-headed attention. The multiple heads learn diverse types of word relationships. However, with standard softmax attention, all attention heads are dense, assigning a non-zero weight to all context words. In this work, we introduce the adaptively sparse Transformer, wherein attention heads have flexible, context-dependent sparsity patterns. This sparsity is accomplished by replacing softmax with α-entmax: a differentiable generalization of softmax that allows low-scoring words to receive precisely zero weight. Moreover, we derive a method to automatically learn the α parameter -- which controls the shape and sparsity of α-entmax -- allowing attention heads to choose between focused or spread-out behavior. Our adaptively sparse Transformer improves interpretability and head diversity when compared to softmax Transformers on machine translation datasets. Findings of the quantitative and qualitative analysis of our approach include that heads in different layers learn different sparsity preferences and tend to be more diverse in their attention distributions than softmax Transformers. Furthermore, at no cost in accuracy, sparsity in attention heads helps to uncover different head specializations.
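For intuition: for α > 1, α-entmax maps a score vector z to a probability vector with entries p_i = max((α - 1) * z_i - τ, 0) ^ (1 / (α - 1)), where the threshold τ is chosen so the entries sum to one; scores below the threshold get exactly zero weight. The sketch below finds τ by one-dimensional bisection. It is a minimal illustration, not the authors' implementation (their code release ships an official, batched, autograd-ready version); the function name, defaults, and iteration count here are illustrative. α = 1.5 matches the paper's fixed-α variant, and α = 2 recovers sparsemax.

```python
import numpy as np

def entmax_bisect(scores, alpha=1.5, n_iter=50):
    """alpha-entmax of a 1-D score vector via bisection (requires alpha > 1).

    Finds tau such that p_i = max((alpha - 1) * z_i - tau, 0) ** (1 / (alpha - 1))
    sums to 1. As alpha -> 1 this approaches softmax; alpha = 2 gives sparsemax.
    """
    assert alpha > 1, "this bisection form requires alpha > 1"
    z = (alpha - 1) * np.asarray(scores, dtype=float)
    exponent = 1.0 / (alpha - 1)
    # The sum of the p_i is decreasing in tau: it is >= 1 at tau = max(z) - 1
    # and 0 at tau = max(z), so the solution lies in that bracket.
    lo, hi = z.max() - 1.0, z.max()
    for _ in range(n_iter):
        tau = 0.5 * (lo + hi)
        p = np.maximum(z - tau, 0.0) ** exponent
        if p.sum() >= 1.0:
            lo = tau
        else:
            hi = tau
    p = np.maximum(z - lo, 0.0) ** exponent
    return p / p.sum()  # normalize away the tiny bisection residual

scores = np.array([2.0, 1.0, 0.1, -1.0])
print(entmax_bisect(scores, alpha=1.5))
# -> approximately [0.83, 0.17, 0.0, 0.0]: the two low-scoring
#    positions receive exactly zero attention weight.
```

In the adaptive variant described above, α is not fixed but learned per attention head by gradient descent, which the paper enables by deriving the gradient of the entmax mapping with respect to α.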

Tasks

Machine Translation

Benchmark Results

Dataset                   | Model                                        | Metric     | Claimed | Verified | Status
--------------------------|----------------------------------------------|------------|---------|----------|-----------
IWSLT2017 German-English  | Adaptively Sparse Transformer (alpha-entmax) | BLEU score | 29.9    | -        | Unverified
IWSLT2017 German-English  | Adaptively Sparse Transformer (1.5-entmax)   | BLEU score | 29.83   | -        | Unverified
WMT2014 English-German    | Adaptively Sparse Transformer (alpha-entmax) | BLEU score | 26.93   | -        | Unverified
WMT2014 English-German    | Adaptively Sparse Transformer (1.5-entmax)   | BLEU score | 25.89   | -        | Unverified
WMT2016 Romanian-English  | Adaptively Sparse Transformer (1.5-entmax)   | BLEU score | 33.1    | -        | Unverified
WMT2016 Romanian-English  | Adaptively Sparse Transformer (alpha-entmax) | BLEU score | 32.89   | -        | Unverified

Reproductions

No reproductions have been submitted yet. Be the first to reproduce this paper.