Alleviating the Inequality of Attention Heads for Neural Machine Translation
2020-09-21 · COLING 2022
Zewei Sun, Shu-Jian Huang, Xin-yu Dai, Jia-Jun Chen
Abstract
Recent studies show that the attention heads in Transformer are not equal. We relate this phenomenon to the imbalanced training of multi-head attention and the model's dependence on specific heads. To tackle this problem, we propose HeadMask, a simple masking method with two specific variants. Experiments show that translation improvements are achieved on multiple language pairs. Subsequent empirical analyses also support our assumption and confirm the effectiveness of the method.
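The abstract does not spell out how the masks are applied. As a rough, hypothetical sketch (not the authors' released implementation), random head masking during training could look like the following, where the tensor layout, the function name `headmask_random`, and the number of masked heads are all assumptions:

```python
import torch

def headmask_random(head_outputs: torch.Tensor, num_masked: int) -> torch.Tensor:
    """Zero out `num_masked` randomly chosen attention heads (training time only).

    head_outputs: per-head attention outputs of shape
                  (batch, num_heads, seq_len, head_dim), before concatenation.
    """
    _, num_heads, _, _ = head_outputs.shape
    # Choose which heads to silence (here: one shared choice for the whole batch).
    dropped = torch.randperm(num_heads)[:num_masked]
    mask = torch.ones(num_heads, device=head_outputs.device)
    mask[dropped] = 0.0
    # Broadcast the 0/1 mask over the batch, sequence, and feature dimensions.
    return head_outputs * mask.view(1, num_heads, 1, 1)
```

The Impt variant listed in the results table would presumably select heads by some importance score rather than at random; that selection rule is not shown here and is left as an assumption.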
Tasks
Machine Translation
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| IWSLT2015 Vietnamese-English | HeadMask (Random-18) | BLEU | 26.85 | — | Unverified |
| IWSLT2015 Vietnamese-English | HeadMask (Impt-18) | BLEU | 26.36 | — | Unverified |
| WMT2016 Romanian-English | HeadMask (Impt-18) | BLEU | 32.95 | — | Unverified |
| WMT2016 Romanian-English | HeadMask (Random-18) | BLEU | 32.85 | — | Unverified |
| WMT2017 Turkish-English | HeadMask (Random-18) | BLEU | 17.56 | — | Unverified |
| WMT2017 Turkish-English | HeadMask (Impt-18) | BLEU | 17.48 | — | Unverified |