AligNART: Non-autoregressive Neural Machine Translation by Jointly Learning to Estimate Alignment and Translate

2021-09-14 · EMNLP 2021

Jongyoon Song, Sungwon Kim, Sungroh Yoon


Abstract

Non-autoregressive neural machine translation (NART) models suffer from the multi-modality problem which causes translation inconsistency such as token repetition. Most recent approaches have attempted to solve this problem by implicitly modeling dependencies between outputs. In this paper, we introduce AligNART, which leverages full alignment information to explicitly reduce the modality of the target distribution. AligNART divides the machine translation task into (i) alignment estimation and (ii) translation with aligned decoder inputs, guiding the decoder to focus on simplified one-to-one translation. To alleviate the alignment estimation problem, we further propose a novel alignment decomposition method. Our experiments show that AligNART outperforms previous non-iterative NART models that focus on explicit modality reduction on WMT14 EnDe and WMT16 RoEn. Furthermore, AligNART achieves BLEU scores comparable to those of the state-of-the-art connectionist temporal classification based models on WMT14 EnDe. We also observe that AligNART effectively addresses the token repetition problem even without sequence-level knowledge distillation.
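The core idea of "translation with aligned decoder inputs" can be sketched as follows. Assuming the alignment module has already produced, for each target position, the set of source indices it is aligned to, the decoder input at that position is built from the corresponding source representations, leaving the decoder a simplified one-to-one translation task. This is a minimal illustration, not the paper's implementation: the function name, the averaging aggregation, and the zero-vector handling of unaligned positions are all hypothetical choices for the sketch.

```python
# Hypothetical sketch: build decoder inputs from source representations
# using a (pre-estimated) target-to-source alignment. The aggregation by
# averaging is an assumption; AligNART's exact construction may differ.

def aligned_decoder_inputs(source_reprs, alignment):
    """source_reprs: one vector (list of floats) per source token.
    alignment: for each target position, the list of aligned source indices."""
    dim = len(source_reprs[0])
    inputs = []
    for src_indices in alignment:
        if not src_indices:
            # unaligned target position: zero vector (a sketch-only choice)
            inputs.append([0.0] * dim)
            continue
        # average the aligned source representations so each target slot
        # sees exactly the source content it should translate
        avg = [sum(source_reprs[i][d] for i in src_indices) / len(src_indices)
               for d in range(dim)]
        inputs.append(avg)
    return inputs

# toy example: 3 source tokens, 4 target positions
src = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
align = [[0], [1, 2], [2], []]  # target position 1 is aligned to sources 1 and 2
print(aligned_decoder_inputs(src, align))
# → [[1.0, 0.0], [1.0, 1.5], [2.0, 2.0], [0.0, 0.0]]
```

Because each decoder input already carries the aligned source content, output tokens no longer compete to cover the same source words, which is one way to see why this reduces repetition under the multi-modality problem.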
