Perception Score: A Learned Metric for Open-ended Text Generation Evaluation

2020-08-07

Jing Gu, Qingyang Wu, Zhou Yu

Abstract

Automatic evaluation of open-ended natural language generation tasks remains a challenge. Existing metrics such as BLEU correlate poorly with human judgment. We propose a novel and powerful learning-based evaluation metric: Perception Score. The method measures the overall quality of a generation and scores it holistically, rather than focusing on a single evaluation criterion such as word overlap. Moreover, it reports the amount of uncertainty in its evaluation result. By incorporating this uncertainty, Perception Score gives a more accurate assessment of the generation system. Perception Score achieves state-of-the-art results on two conditional generation tasks and two unconditional generation tasks.
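The abstract's core idea, a learned scorer that outputs both a quality score and an uncertainty estimate, can be sketched as below. This is purely an illustrative toy, not the paper's actual model: the hand-crafted features, the ensemble of randomly perturbed linear scorers (standing in for a trained network with, e.g., MC dropout), and the function name `perception_score` are all assumptions for the sake of the example.

```python
import random
import statistics

def features(context, generation):
    # Toy features standing in for learned representations:
    # word overlap with the context and a capped length ratio.
    ctx, gen = context.split(), generation.split()
    overlap = len(set(ctx) & set(gen)) / max(len(set(gen)), 1)
    length_ratio = min(len(gen) / max(len(ctx), 1), 1.0)
    return [overlap, length_ratio]

def perception_score(context, generation, n_members=10, seed=0):
    """Illustrative uncertainty-aware scorer (NOT the paper's model).

    An ensemble of perturbed linear scorers plays the role of a trained
    model with stochastic predictions; the ensemble mean is the quality
    score and the standard deviation is the uncertainty estimate.
    """
    rng = random.Random(seed)
    x = features(context, generation)
    member_scores = []
    for _ in range(n_members):
        # Each member perturbs the (hypothetical) base weights slightly.
        w = [0.6 + rng.gauss(0, 0.05), 0.4 + rng.gauss(0, 0.05)]
        s = sum(wi * xi for wi, xi in zip(w, x))
        member_scores.append(max(0.0, min(1.0, s)))  # clamp to [0, 1]
    return statistics.mean(member_scores), statistics.stdev(member_scores)

score, uncertainty = perception_score(
    "the cat sat on the mat", "a cat was sitting on the mat")
```

A downstream evaluator could then discount or flag generations whose score comes with high uncertainty, which is the behavior the abstract attributes to Perception Score.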
