Best practices for the human evaluation of automatically generated text

2019-10-01 · WS 2019

Chris van der Lee, Albert Gatt, Emiel van Miltenburg, Sander Wubben, Emiel Krahmer

Abstract

Currently, there is little agreement as to how Natural Language Generation (NLG) systems should be evaluated. While there is some agreement regarding automatic metrics, there is a high degree of variation in the way that human evaluation is carried out. This paper provides an overview of how human evaluation is currently conducted, and presents a set of best practices, grounded in the literature. With this paper, we hope to contribute to the quality and consistency of human evaluations in NLG.
