
Exploring Diversity in Back Translation for Low-Resource Machine Translation

2022-06-01 · DeepLo 2022

Laurie Burchell, Alexandra Birch, Kenneth Heafield

Abstract

Back translation is one of the most widely used methods for improving the performance of neural machine translation systems. Recent research has sought to enhance the effectiveness of this method by increasing the 'diversity' of the generated translations. We argue that the definitions and metrics used to quantify 'diversity' in previous work have been insufficient. This work puts forward a more nuanced framework for understanding diversity in training data, splitting it into lexical diversity and syntactic diversity. We present novel metrics for measuring these different aspects of diversity and carry out an empirical analysis of the effect of these types of diversity on final neural machine translation model performance for low-resource English-Turkish and mid-resource English-Icelandic. Our findings show that generating back translation using nucleus sampling results in higher final model performance, and that this method of generation has high levels of both lexical and syntactic diversity. We also find evidence that lexical diversity is more important than syntactic diversity for back translation performance.
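The abstract credits nucleus (top-p) sampling as the generation strategy behind the best back translations. The sketch below illustrates the general nucleus sampling procedure: truncate the model's next-token distribution to the smallest set of tokens whose cumulative probability mass reaches p, renormalise, and sample from that set. It is a generic, minimal illustration, not the authors' implementation; the toy distribution and p = 0.9 are illustrative assumptions.

```python
import numpy as np

def nucleus_sample(probs, p=0.9, rng=None):
    """Sample a token index from the smallest set of tokens (the 'nucleus')
    whose cumulative probability is at least p."""
    rng = rng or np.random.default_rng()
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs)[::-1]            # token indices, most probable first
    cum = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cum, p)) + 1  # smallest prefix covering >= p mass
    nucleus = order[:cutoff]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()  # renormalise over the nucleus
    return int(rng.choice(nucleus, p=nucleus_probs))

# Toy next-token distribution over a 5-token vocabulary (hypothetical values).
# With p=0.9, only the first three tokens (cumulative mass 0.95) can be drawn.
probs = [0.5, 0.3, 0.15, 0.04, 0.01]
token = nucleus_sample(probs, p=0.9)
```

In a back-translation pipeline, this sampling step would replace the argmax in beam search at each decoding step of the target-to-source model, trading some fluency for the diversity the paper measures.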
