SOTAVerified

Sparsifying Transformer Models with Trainable Representation Pooling

2020-09-10 · ACL 2022 · Code Available

Michał Pietruszka, Łukasz Borchmann, Łukasz Garncarek

Abstract

We propose a novel method to sparsify attention in the Transformer model by learning to select the most informative token representations during training, thus focusing on the task-specific parts of an input. A robust trainable top-k operator reduces the quadratic time and memory complexity to sublinear. Our experiments on a challenging long-document summarization task show that even our simple baseline performs comparably to the current SOTA, and with trainable pooling we retain its top quality while being 1.8× faster during training, 4.5× faster during inference, and up to 13× more computationally efficient in the decoder.
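The abstract describes scoring token representations and keeping only the top-k of them so that later attention layers operate on a shorter sequence. Below is a minimal illustrative sketch of that idea in PyTorch, not the authors' operator: the module name TopKPooling, the linear scorer, and the straight-through gating trick are assumptions made for the example.

```python
# Hypothetical sketch of trainable top-k token pooling (not the paper's implementation).
# A learned layer scores each token; the k highest-scoring representations are kept,
# and a soft gate lets gradients reach the scorer despite the hard selection.
import torch
import torch.nn as nn


class TopKPooling(nn.Module):
    def __init__(self, hidden_dim: int, k: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)  # learned importance score per token
        self.k = k

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim)
        scores = self.scorer(hidden_states).squeeze(-1)        # (batch, seq_len)
        weights = torch.sigmoid(scores)                        # soft gate in (0, 1)
        topk = torch.topk(scores, self.k, dim=-1)              # hard selection of k tokens
        idx = topk.indices.unsqueeze(-1).expand(-1, -1, hidden_states.size(-1))
        pooled = torch.gather(hidden_states, 1, idx)           # (batch, k, hidden_dim)
        gate = torch.gather(weights, 1, topk.indices).unsqueeze(-1)
        # Straight-through style: forward pass returns the pooled tokens unchanged,
        # backward pass propagates gradients through the soft gate to the scorer.
        return pooled * (1.0 - gate).detach() + pooled * gate


# Example: pool a 4096-token document down to 512 representations before decoding.
pool = TopKPooling(hidden_dim=768, k=512)
x = torch.randn(2, 4096, 768)
print(pool(x).shape)  # torch.Size([2, 512, 768])
```

Shrinking the sequence from 4096 to 512 tokens is what yields the sublinear attention cost downstream; the gating detail here is only one possible way to make the selection trainable.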

Tasks

Long Document Summarization

Benchmark Results

Dataset | Model | Metric | Claimed | Verified | Status
arXiv Summarization Dataset | DeepPyramidion | ROUGE-1 | 47.15 | - | Unverified
arXiv Summarization Dataset | Blockwise (baseline) | ROUGE-1 | 46.85 | - | Unverified
PubMed | DeepPyramidion | ROUGE-1 | 47.81 | - | Unverified

Reproductions