
Online Learning Meets Machine Translation Evaluation: Finding the Best Systems with the Least Human Effort

2021-05-27 · ACL 2021

Vânia Mendonça, Ricardo Rei, Luisa Coheur, Alberto Sardinha, Ana Lúcia Santos


Abstract

In Machine Translation, assessing the quality of a large number of automatic translations can be challenging. Automatic metrics are not reliable when it comes to high-performing systems. In addition, resorting to human evaluators can be expensive, especially when evaluating multiple systems. To overcome the latter challenge, we propose a novel application of online learning that, given an ensemble of Machine Translation systems, dynamically converges to the best systems by taking advantage of the human feedback available. Our experiments on WMT'19 datasets show that our online approach quickly converges to the top-3 ranked systems for the language pairs considered, despite the lack of human feedback for many translations.
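The online-learning setup described in the abstract can be illustrated with a multi-armed bandit sketch: each MT system is an arm, and sparse human judgments serve as rewards. The EXP3-style update below is a minimal illustration of this idea, not the authors' exact algorithm; the function names and the `get_feedback` interface are assumptions for the sake of the example.

```python
import math
import random

def exp3(num_systems, rounds, get_feedback, gamma=0.1):
    """EXP3-style bandit over an ensemble of MT systems (illustrative sketch).

    get_feedback(t, i) returns a reward in [0, 1] for system i at round t,
    or None when no human feedback is available for that translation
    (the weights are then left unchanged, modeling sparse feedback).
    """
    weights = [1.0] * num_systems
    for t in range(rounds):
        total = sum(weights)
        # mix the weight distribution with uniform exploration
        probs = [(1 - gamma) * w / total + gamma / num_systems for w in weights]
        # sample a system to produce this round's translation
        chosen = random.choices(range(num_systems), weights=probs)[0]
        reward = get_feedback(t, chosen)
        if reward is None:
            continue  # no human judgment collected for this translation
        # importance-weighted reward estimate for the chosen arm only
        est = reward / probs[chosen]
        weights[chosen] *= math.exp(gamma * est / num_systems)
    return weights
```

Even when feedback arrives for only a fraction of the rounds, the weights of consistently well-rated systems grow, so sampling concentrates on the best systems over time.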
