
Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models

2021-06-10 · ACL 2021 · Code Available

Tyler A. Chang, Yifan Xu, Weijian Xu, Zhuowen Tu


Abstract

In this paper, we detail the relationship between convolutions and self-attention in natural language tasks. We show that relative position embeddings in self-attention layers are equivalent to recently-proposed dynamic lightweight convolutions, and we consider multiple new ways of integrating convolutions into Transformer self-attention. Specifically, we propose composite attention, which unites previous relative position embedding methods under a convolutional framework. We conduct experiments by training BERT with composite attention, finding that convolutions consistently improve performance on multiple downstream tasks, replacing absolute position embeddings. To inform future work, we present results comparing lightweight convolutions, dynamic convolutions, and depthwise-separable convolutions in language model pre-training, considering multiple injection points for convolutions in self-attention layers.
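The equivalence the abstract describes is easiest to see in code. Below is a minimal PyTorch sketch of single-head self-attention with an additive relative-position bias, the ingredient the paper relates to dynamic lightweight convolutions. This is an illustration under assumptions, not the paper's composite attention implementation: the class name `RelativeBiasAttention`, the parameter `max_rel_dist`, and the single-head simplification are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelativeBiasAttention(nn.Module):
    """Single-head self-attention with an additive relative-position bias.

    The learned bias b[j - i] acts like a lightweight convolution kernel
    over positions, added to the content-based attention logits.
    """

    def __init__(self, d_model: int, max_rel_dist: int = 8):
        super().__init__()
        self.d_model = d_model
        self.max_rel_dist = max_rel_dist
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # One scalar per clipped relative offset in [-max_rel_dist, max_rel_dist].
        self.rel_bias = nn.Parameter(torch.zeros(2 * max_rel_dist + 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        batch, seq_len, _ = x.shape
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)

        # Content-based logits, as in standard scaled dot-product attention.
        logits = q @ k.transpose(-2, -1) / self.d_model ** 0.5

        # Relative offsets j - i, clipped to the supported window.
        pos = torch.arange(seq_len, device=x.device)
        rel = (pos[None, :] - pos[:, None]).clamp(-self.max_rel_dist,
                                                  self.max_rel_dist)
        logits = logits + self.rel_bias[rel + self.max_rel_dist]

        # Softmax normalizes content and position terms jointly, so the
        # bias behaves like a (normalized) convolution kernel.
        return F.softmax(logits, dim=-1) @ v

# Example usage:
attn = RelativeBiasAttention(d_model=64)
out = attn(torch.randn(2, 16, 64))  # -> (2, 16, 64)
```

Because the softmax is applied to the content term and the bias together, the bias indexed by the relative offset j − i mixes values exactly the way a dynamic lightweight convolution kernel would, which is the re-interpretation the abstract describes.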
