SOTAVerified

Pay Attention when Required

2020-09-09 · Code Available

Swetha Mandava, Szymon Migacz, Alex Fit-Florea


Abstract

Transformer-based models consist of interleaved feed-forward blocks, which capture content meaning, and relatively more expensive self-attention blocks, which capture context meaning. In this paper, we explored trade-offs and orderings of these blocks to improve upon the current Transformer architecture, and we proposed the PAR Transformer. It needs 35% lower compute time than Transformer-XL, achieved by replacing ~63% of the self-attention blocks with feed-forward blocks, while retaining perplexity on the WikiText-103 language modelling benchmark. We further validated our results on the text8 and enwiki8 datasets, as well as on the BERT model.
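The core idea in the abstract can be illustrated with a small sketch. This is not the authors' code: the block names, the `replace_fraction` parameter, and the placement of the remaining self-attention blocks early in the stack are illustrative assumptions; only the ~63% replacement figure comes from the abstract.

```python
def par_pattern(n_layers=16, replace_fraction=0.63):
    """Build a hypothetical PAR-style block ordering.

    A baseline Transformer-XL-like stack alternates self-attention
    ("attn") and feed-forward ("ff") blocks. PAR-style architectures
    replace a fraction of the expensive attention blocks with cheaper
    feed-forward blocks (the abstract cites ~63% replaced).
    """
    baseline = ["attn", "ff"] * n_layers          # interleaved baseline stack
    n_attn = baseline.count("attn")
    n_keep = n_attn - round(n_attn * replace_fraction)
    # Placing the surviving attention blocks first is a simplifying
    # assumption for illustration, not the ordering found by the paper.
    return ["attn"] * n_keep + ["ff"] * (len(baseline) - n_keep)

pattern = par_pattern()
print(pattern)
print(f"feed-forward share: {pattern.count('ff') / len(pattern):.0%}")
```

With the defaults above, 16 attention blocks shrink to 6, so 26 of the 32 blocks are feed-forward, which is where the compute saving comes from, since each feed-forward block is cheaper than a self-attention block.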

Benchmark Results

| Dataset      | Model                 | Metric                  | Claimed | Verified | Status     |
|--------------|-----------------------|-------------------------|---------|----------|------------|
| enwiki8      | PAR Transformer 24B   | Bit per Character (BPC) | 1.11    |          | Unverified |
| text8        | PAR Transformer 24B   | Bit per Character (BPC) | 1.18    |          | Unverified |
| WikiText-103 | PAR Transformer Large | Test perplexity         | 18.4    |          | Unverified |
| WikiText-103 | PAR Transformer Base  | Test perplexity         | 22.7    |          | Unverified |

Reproductions