
Handling Syntactic Divergence in Low-resource Machine Translation

2019-08-30 · IJCNLP 2019

Chunting Zhou, Xuezhe Ma, Junjie Hu, Graham Neubig

Code Available

Abstract

Despite the impressive empirical successes of neural machine translation (NMT) on standard benchmarks, limited parallel data impedes the application of NMT models to many language pairs. Data augmentation methods such as back-translation make it possible to use monolingual data to help alleviate these issues, but back-translation itself fails in extreme low-resource scenarios, especially for syntactically divergent languages. In this paper, we propose a simple yet effective solution, whereby target-language sentences are reordered to match the order of the source and used as an additional source of training-time supervision. Experiments on simulated low-resource Japanese-to-English and real low-resource Uyghur-to-English scenarios find significant improvements over other semi-supervised alternatives.
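To make the core idea concrete, here is a minimal toy sketch (not the authors' implementation) of how a target sentence might be reordered into source word order to create an extra supervision pair. The alignment links and example sentence below are purely illustrative assumptions.

```python
# Toy sketch of reordering-based augmentation (illustrative only).
# Given a hypothetical word alignment, sort target tokens by the
# source position they align to, yielding a source-order target.

def reorder_target(target_tokens, alignment):
    """Reorder target tokens to follow source word order.

    alignment[t] = index of the source word that target token t
    aligns to (one link per target token, for simplicity).
    """
    order = sorted(range(len(target_tokens)), key=lambda t: alignment[t])
    return [target_tokens[t] for t in order]

# Japanese (SOV) source: "watashi wa ringo o tabeta" ("I ate an apple").
# English is SVO; reordering the target to mimic the verb-final source
# order gives an additional, syntactically matched training signal.
target = ["I", "ate", "an", "apple"]
alignment = [0, 4, 2, 2]  # "I"->watashi, "ate"->tabeta, "an/apple"->ringo
print(" ".join(reorder_target(target, alignment)))  # I an apple ate
```

In practice the alignment would come from an automatic word aligner rather than being hand-specified as it is here.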
