Improve Transformer Models with Better Relative Position Embeddings

2020-09-28 · Findings of the Association for Computational Linguistics · Code Available

Zhiheng Huang, Davis Liang, Peng Xu, Bing Xiang

Abstract

Transformer architectures rely on explicit position encodings to preserve a notion of word order. In this paper, we argue that existing work does not fully utilize position information. For example, the original sinusoidal embedding is fixed and not learnable. We first review absolute position embeddings and existing methods for relative position embeddings, and then propose new techniques that encourage increased interaction between the query, key, and relative position embeddings in the self-attention mechanism. Our most promising approach is a generalization of the absolute position embedding, improving results on SQuAD1.1 compared to previous position embedding approaches. In addition, we examine the inductive property: whether a position embedding is robust enough to handle long sequences. We demonstrate empirically that our relative position embedding method generalizes reasonably well and is robust from the inductive perspective. Finally, we show that our proposed method can be adopted as a near drop-in replacement for improving the accuracy of large models with a small computational budget.
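To make the query–key–relative-position interaction concrete, here is a minimal numpy sketch of self-attention with relative position embeddings added to the attention scores, in the spirit of the Shaw et al. (2018) baseline that the paper builds on. The function names, the clipping distance, and the implementation details are illustrative assumptions, not the authors' code or their proposed generalization:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relative_attention(q, k, v, rel_emb, max_dist):
    """Single-head self-attention where each query additionally interacts
    with a learnable embedding of the (clipped) query-key distance.

    q, k, v : (seq_len, d) arrays
    rel_emb : (2 * max_dist + 1, d) array, one embedding per clipped
              relative distance in [-max_dist, max_dist]
    """
    seq_len, d = q.shape
    # content-based scores: query . key
    scores = q @ k.T  # (seq_len, seq_len)
    # position-based scores: query . embedding of relative distance (j - i)
    for i in range(seq_len):
        for j in range(seq_len):
            idx = int(np.clip(j - i, -max_dist, max_dist)) + max_dist
            scores[i, j] += q[i] @ rel_emb[idx]
    weights = softmax(scores / np.sqrt(d), axis=-1)
    return weights @ v
```

Because the position term depends only on the clipped distance `j - i` rather than on absolute indices, the same embedding table applies to any sequence length, which is the inductive property the abstract refers to. The paper's proposed techniques add further interaction terms (e.g. between keys and the relative embeddings) beyond this baseline.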
