Towards Document-Level Human MT Evaluation: On the Issues of Annotator Agreement, Effort and Misevaluation
2021-04-01 · EACL (HumEval) 2021
Sheila Castilho
Abstract
Document-level human evaluation of machine translation (MT) has been attracting growing interest in the community. However, little is known about the issues involved in using document-level methodologies to assess MT quality. In this article, we compare inter-annotator agreement (IAA) scores and the effort required to assess quality across different document-level methodologies, and we examine the issue of misevaluation when sentences are evaluated out of context.
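As an illustrative aside on the IAA scores the abstract mentions: Cohen's kappa is a standard statistic for agreement between two annotators, correcting raw agreement for agreement expected by chance. The paper does not specify its exact computation here; the sketch below is a generic implementation with hypothetical quality labels, not data from the study.

```python
from collections import Counter

def cohen_kappa(ann_a, ann_b):
    """Cohen's kappa for two annotators labelling the same items."""
    assert len(ann_a) == len(ann_b)
    n = len(ann_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(x == y for x, y in zip(ann_a, ann_b)) / n
    # Chance agreement: product of the annotators' label marginals.
    ca, cb = Counter(ann_a), Counter(ann_b)
    p_e = sum(ca[l] * cb[l] for l in set(ca) | set(cb)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical sentence-level quality judgements from two annotators.
a = ["good", "good", "bad", "good", "bad", "bad", "good", "bad"]
b = ["good", "bad", "bad", "good", "bad", "good", "good", "bad"]
print(cohen_kappa(a, b))  # 0.5
```

With these toy labels the annotators agree on 6 of 8 items (observed agreement 0.75) while chance agreement is 0.5, yielding kappa = 0.5, conventionally read as "moderate" agreement.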