
Consistent Human Evaluation of Machine Translation across Language Pairs

2022-05-17 · AMTA 2022

Daniel Licht, Cynthia Gao, Janice Lam, Francisco Guzman, Mona Diab, Philipp Koehn

Abstract

Obtaining meaningful quality scores for machine translation systems through human evaluation remains a challenge given the high variability between human evaluators, partly due to subjective expectations for translation quality for different language pairs. We propose a new metric called XSTS that is more focused on semantic equivalence and a cross-lingual calibration method that enables more consistent assessment. We demonstrate the effectiveness of these novel contributions in large-scale evaluation studies across up to 14 language pairs, with translation both into and out of English.
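
To illustrate the general idea of cross-lingual calibration described above, here is a minimal sketch: it assumes a simple mean-shift adjustment against a shared calibration set that every evaluator rates, so that systematically strict or lenient evaluators are brought onto a common scale. This is only an illustration of the concept; the paper's actual calibration procedure may differ.

```python
from statistics import mean

def calibrate_scores(raw_scores, calibration_scores, target_mean=3.0):
    """Shift one evaluator's scores so that their ratings on a shared
    calibration set average to a common target value.

    raw_scores:         scores the evaluator gave to the study items
    calibration_scores: scores the same evaluator gave to the shared
                        cross-lingual calibration set
    target_mean:        the agreed-upon mean score for the calibration set
    """
    offset = target_mean - mean(calibration_scores)
    return [score + offset for score in raw_scores]


# Example: an evaluator who is systematically strict (low mean on the
# calibration set) has all of their study scores shifted upward.
strict_items = [2, 3, 2, 4]
strict_calibration = [2, 2, 3]   # mean 2.33, below the target of 3.0
print(calibrate_scores(strict_items, strict_calibration))
```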
