
On Biasing Transformer Attention Towards Monotonicity

2021-04-08 · NAACL 2021 · Code Available

Annette Rios, Chantal Amrhein, Noëmi Aepli, Rico Sennrich


Abstract

Many sequence-to-sequence tasks in natural language processing are roughly monotonic in the alignment between source and target sequence, and previous work has facilitated or enforced learning of monotonic attention behavior via specialized attention functions or pretraining. In this work, we introduce a monotonicity loss function that is compatible with standard attention mechanisms and test it on several sequence-to-sequence tasks: grapheme-to-phoneme conversion, morphological inflection, transliteration, and dialect normalization. Experiments show that we can achieve largely monotonic behavior. Performance is mixed, with larger gains on top of RNN baselines. General monotonicity does not benefit transformer multihead attention; however, we see isolated improvements when only a subset of heads is biased towards monotonic behavior.
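The abstract describes adding a monotonicity loss on top of standard (soft) attention rather than replacing the attention mechanism. The sketch below is one illustrative way such a penalty could be wired into training; it is not necessarily the authors' exact formulation, and the function name, the expected-position formulation, and the weight `lambda_mono` are assumptions for illustration only.

```python
import torch

def monotonicity_loss(attn: torch.Tensor) -> torch.Tensor:
    """Illustrative monotonicity penalty (not the paper's exact loss).

    attn: attention weights of shape (batch, tgt_len, src_len),
          each row a distribution over source positions.

    Idea: compute the expected source position attended to at each
    target step and penalize decreases between consecutive steps,
    i.e. backward jumps that violate monotonic alignment.
    """
    batch, tgt_len, src_len = attn.shape
    positions = torch.arange(src_len, dtype=attn.dtype, device=attn.device)
    # Expected source position per target step: (batch, tgt_len)
    expected_pos = (attn * positions).sum(dim=-1)
    # Step-to-step change; negative values are non-monotonic moves.
    deltas = expected_pos[:, 1:] - expected_pos[:, :-1]
    # Penalize only the violations.
    return torch.clamp(-deltas, min=0.0).mean()

# Usage sketch: combine with the task loss, optionally applying the
# penalty to only a subset of transformer heads (hypothetical weight).
# lambda_mono = 0.5
# loss = task_loss + lambda_mono * monotonicity_loss(attn_weights)
```

Restricting the penalty to a subset of heads, as the abstract suggests, would simply mean averaging this loss over the chosen heads' attention matrices instead of all of them.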
