
LLM-based NLG Evaluation: Current Status and Challenges

2024-02-02

Mingqi Gao, Xinyu Hu, Jie Ruan, Xiao Pu, Xiaojun Wan


Abstract

Evaluating natural language generation (NLG) is a vital but challenging problem in natural language processing. Traditional evaluation metrics, which mainly capture content overlap (e.g., n-gram overlap) between system outputs and references, are far from satisfactory, and large language models (LLMs) such as ChatGPT have demonstrated great potential for NLG evaluation in recent years. Various LLM-based automatic evaluation methods have been proposed, including metrics derived from LLMs, prompting LLMs, fine-tuning LLMs, and human-LLM collaborative evaluation. In this survey, we first present a taxonomy of LLM-based NLG evaluation methods and discuss the pros and cons of each. Finally, we discuss several open problems in this area and point out directions for future research.
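The abstract contrasts LLM-based evaluation with traditional n-gram overlap metrics. As a reminder of what "n-gram overlap" means in practice, the following is a minimal sketch of clipped n-gram precision, the core idea behind BLEU-style metrics; the function name and simplifications (whitespace tokenization, a single reference, no brevity penalty or smoothing) are illustrative and not from the paper.

```python
from collections import Counter


def ngram_precision(candidate: str, reference: str, n: int = 2) -> float:
    """Fraction of candidate n-grams that also occur in the reference.

    A simplified sketch of the content-overlap idea behind metrics such
    as BLEU; real metrics add brevity penalties, multiple references,
    and smoothing over several n-gram orders.
    """
    cand_tokens = candidate.split()
    ref_tokens = reference.split()
    cand_ngrams = Counter(
        tuple(cand_tokens[i:i + n]) for i in range(len(cand_tokens) - n + 1)
    )
    ref_ngrams = Counter(
        tuple(ref_tokens[i:i + n]) for i in range(len(ref_tokens) - n + 1)
    )
    total = sum(cand_ngrams.values())
    if total == 0:
        return 0.0
    # Clipped counts: each candidate n-gram is credited at most as many
    # times as it appears in the reference.
    matched = sum(min(c, ref_ngrams[ng]) for ng, c in cand_ngrams.items())
    return matched / total
```

A metric like this rewards surface overlap only: a fluent paraphrase with no shared bigrams scores 0.0, which is exactly the weakness that motivates the LLM-based evaluation methods the survey covers.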
