
Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation

2021-04-18 · EMNLP 2021 · Code Available

Mozhdeh Gheini, Xiang Ren, Jonathan May


Abstract

We study the power of cross-attention in the Transformer architecture within the context of transfer learning for machine translation, and extend the findings of studies into cross-attention when training from scratch. We conduct a series of experiments through fine-tuning a translation model on data where either the source or target language has changed. These experiments reveal that fine-tuning only the cross-attention parameters is nearly as effective as fine-tuning all parameters (i.e., the entire translation model). We provide insights into why this is the case and observe that limiting fine-tuning in this manner yields cross-lingually aligned embeddings. The implications of this finding for researchers and practitioners include a mitigation of catastrophic forgetting, the potential for zero-shot translation, and the ability to extend machine translation models to several new language pairs with reduced parameter storage overhead.
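The recipe the abstract describes, updating only the cross-attention parameters of a pretrained encoder-decoder translation model while keeping everything else frozen, can be approximated by toggling `requires_grad` before fine-tuning. Below is a minimal sketch, assuming a HuggingFace Marian-style model in which decoder cross-attention modules are named `encoder_attn`; the checkpoint name and module-name substring are illustrative assumptions, not details taken from the paper.

```python
import torch
from transformers import MarianMTModel

# Illustrative checkpoint; any encoder-decoder Transformer translation model
# with decoder cross-attention can be handled the same way.
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")

# Freeze every parameter, then re-enable gradients only for the decoder's
# cross-attention (encoder-decoder attention) blocks. In Marian/BART-style
# decoders these modules are named "encoder_attn"; note this substring also
# matches the adjacent "encoder_attn_layer_norm" parameters, so restrict it
# further if you want strictly the attention projections. Other codebases use
# different names (e.g., "encoder_decoder_attention" in fairseq).
for name, param in model.named_parameters():
    param.requires_grad = "encoder_attn" in name

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Fine-tuning {trainable:,} of {total:,} parameters "
      f"({100 * trainable / total:.1f}%)")
```

With only the cross-attention parameters left trainable, a standard fine-tuning loop on the new language pair's parallel data can then be run unchanged; the frozen remainder of the model is what gives the storage savings and the catastrophic-forgetting mitigation the abstract points to.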
