
Perplexity from PLM Is Unreliable for Evaluating Text Quality

2022-10-12

Yequan Wang, Jiawen Deng, Aixin Sun, Xuying Meng


Abstract

Recently, many works have used perplexity (PPL) to evaluate the quality of generated text, assuming that a smaller PPL indicates better quality (i.e., fluency) of the text being evaluated. However, we find that PPL is an unqualified referee and cannot evaluate generated text fairly, for the following reasons: (i) the PPL of short text is larger than that of long text, which goes against common sense; (ii) repeated text spans can damage the performance of PPL; and (iii) punctuation marks can heavily affect the performance of PPL. Experiments show that PPL is unreliable for evaluating the quality of a given text. Finally, we discuss the key problems with evaluating text quality using language models.
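Perplexity over a token sequence is conventionally the exponential of the average negative log-likelihood that the language model assigns to the tokens. A minimal sketch of this computation, using made-up per-token probabilities rather than a real PLM (the function name and the example sequences are illustrative assumptions, not from the paper), also hints at the length sensitivity the abstract criticizes: a short sequence averages over fewer tokens, so a single surprising token inflates its PPL more than it would for a long sequence.

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the average negative log-likelihood
    over the tokens of a sequence."""
    n = len(token_log_probs)
    nll = -sum(token_log_probs) / n  # average negative log-likelihood
    return math.exp(nll)

# Hypothetical per-token log-probabilities from a language model.
# Both sequences start with the same surprising token (p = 0.05);
# the long sequence dilutes it across more tokens.
short_seq = [math.log(0.05)] + [math.log(0.6)] * 1
long_seq  = [math.log(0.05)] + [math.log(0.6)] * 9

print(perplexity(short_seq))  # higher PPL for the short sequence
print(perplexity(long_seq))   # lower PPL for the long sequence
```

Under this toy setting the short sequence scores a markedly higher PPL than the long one despite containing the same tokens, matching reason (i) in the abstract.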
