
The Case for Translation-Invariant Self-Attention in Transformer-Based Language Models

2021-06-03 · ACL 2021

Ulme Wennberg, Gustav Eje Henter

Abstract

Mechanisms for encoding positional information are central to transformer-based language models. In this paper, we analyze the position embeddings of existing language models, finding strong evidence of translation invariance, both for the embeddings themselves and for their effect on self-attention. The degree of translation invariance increases during training and correlates positively with model performance. Our findings lead us to propose translation-invariant self-attention (TISA), which accounts for the relative position between tokens in an interpretable fashion without needing conventional position embeddings. Our proposal has several theoretical advantages over existing position-representation approaches. Experiments show that it improves on regular ALBERT on GLUE tasks, while adding orders of magnitude fewer positional parameters.
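
The abstract only sketches the mechanism, so the following is a minimal illustration of the general idea behind translation-invariant self-attention: positional information enters the attention logits solely through a learned function of the relative offset j - i, rather than through absolute position embeddings added to the token representations, so shifting the whole sequence leaves the positional scores unchanged. The lookup-table bias, the clipping window, and all variable names below are illustrative assumptions, not the authors' code; the paper parameterizes these positional scores differently, so treat this as a sketch of the principle rather than TISA itself.

# Sketch (not the authors' implementation): single-head attention where
# position affects the logits only via a learned bias indexed by the
# clipped relative offset j - i.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def translation_invariant_attention(X, Wq, Wk, Wv, rel_bias, max_offset):
    """X: (seq_len, d_model). rel_bias: (2*max_offset + 1,) learned scores
    indexed by the clipped relative offset j - i."""
    n, _ = X.shape
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    logits = Q @ K.T / np.sqrt(Q.shape[-1])                    # content term
    offsets = np.arange(n)[None, :] - np.arange(n)[:, None]    # j - i
    offsets = np.clip(offsets, -max_offset, max_offset) + max_offset
    logits = logits + rel_bias[offsets]                        # positional term F(j - i)
    return softmax(logits, axis=-1) @ V

# Toy usage: note that no absolute position embeddings are added to X.
rng = np.random.default_rng(0)
d, n, max_offset = 16, 10, 4
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
rel_bias = rng.normal(size=(2 * max_offset + 1,)) * 0.1
out = translation_invariant_attention(X, Wq, Wk, Wv, rel_bias, max_offset)
print(out.shape)  # (10, 16)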
