SOTAVerified

NLG Evaluation

Evaluation of text generated by NLG (Natural Language Generation) systems, such as large language models.

Papers

Showing 51–60 of 71 papers

| Title | Status | Hype |
| --- | --- | --- |
| A Survey of Natural Language Generation | — | 0 |
| ImaginE: An Imagination-Based Automatic Evaluation Metric for Natural Language Generation | — | 0 |
| Compression, Transduction, and Creation: A Unified Framework for Evaluating Natural Language Generation | Code | 1 |
| Perturbation CheckLists for Evaluating NLG Evaluation Metrics | Code | 0 |
| Language Model Augmented Relevance Score | — | 0 |
| All That's 'Human' Is Not Gold: Evaluating Human Evaluation of Generated Text | — | 0 |
| MIPE: A Metric Independent Pipeline for Effective Code-Mixed NLG Evaluation | — | 0 |
| Active Evaluation: Efficient NLG Evaluation with Few Pairwise Comparisons | — | 0 |
Page 6 of 8

No leaderboard results yet.