
GFST: Gender-Filtered Self-Training for More Accurate Gender in Translation

2021-11-01 · EMNLP 2021 · Code Available

Prafulla Kumar Choubey, Anna Currey, Prashant Mathur, Georgiana Dinu

Abstract

Targeted evaluations have found that machine translation systems often output incorrect gender in translations, even when the gender is clear from context. Furthermore, these incorrectly gendered translations have the potential to reflect or amplify social biases. We propose gender-filtered self-training (GFST) to improve gender translation accuracy on unambiguously gendered inputs. Our GFST approach uses a source monolingual corpus and an initial model to generate gender-specific pseudo-parallel corpora which are then filtered and added to the training data. We evaluate GFST on translation from English into five languages, finding that it improves gender accuracy without damaging generic quality. We also show the viability of GFST on several experimental settings, including re-training from scratch, fine-tuning, controlling the gender balance of the data, forward translation, and back-translation.
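The data-selection step described above can be sketched in miniature: translate source-side monolingual sentences with an initial model, then keep only pairs where the target's gender marking agrees with the unambiguous source gender. This is a toy illustration, not the paper's implementation; the word lists, the `translate` stub, and the heuristics are all assumptions for the sake of a runnable example.

```python
# Minimal sketch of gender-filtered self-training (GFST) data selection.
# Word lists, heuristics, and the stub model are illustrative assumptions,
# not the paper's actual filtering criteria.

FEMININE = {"ella", "doctora", "ingeniera"}
MASCULINE = {"el", "doctor", "ingeniero"}

def source_gender(src):
    # Toy heuristic: unambiguous English pronouns mark the source gender.
    tokens = set(src.lower().split())
    if tokens & {"she", "her"}:
        return "F"
    if tokens & {"he", "him", "his"}:
        return "M"
    return None  # ambiguous source

def target_gender(tgt):
    # Toy heuristic: gendered target-side words mark the translation's gender.
    tokens = set(tgt.lower().split())
    has_f = bool(tokens & FEMININE)
    has_m = bool(tokens & MASCULINE)
    if has_f and not has_m:
        return "F"
    if has_m and not has_f:
        return "M"
    return None  # mixed or unmarked

def gfst_filter(monolingual, translate):
    """Translate source sentences with an initial model and keep only the
    pseudo-parallel pairs whose target gender agrees with the source."""
    kept = []
    for src in monolingual:
        g = source_gender(src)
        if g is None:
            continue  # skip sentences without unambiguous gender
        hyp = translate(src)
        if target_gender(hyp) == g:
            kept.append((src, hyp))
    return kept

# Stub standing in for the initial translation model.
TOY_MODEL = {
    "she is a doctor": "ella es doctora",
    "he is a doctor": "el es doctor",
    "she is an engineer": "el es ingeniero",  # wrong gender -> filtered out
}

pairs = gfst_filter(list(TOY_MODEL), TOY_MODEL.get)
print(pairs)
```

The filtered pairs would then be added back to the parallel training data, either for re-training from scratch or for fine-tuning, as the abstract notes.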
