SOTAVerified

NLG Evaluation

Evaluation of text generated by NLG (Natural Language Generation) systems, such as large language models.
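As a rough illustration of what the simplest reference-based metrics in this area compute, here is a minimal token-overlap F1 scorer. This is a toy sketch for orientation only, not the method of any paper listed below; the function name `unigram_f1` is invented for this example.

```python
from collections import Counter

def unigram_f1(hypothesis: str, reference: str) -> float:
    """Token-level F1 between a generated hypothesis and a reference text.

    Tokenization is naive (lowercase whitespace split); real metrics use
    proper tokenizers, multiple references, or learned scoring models.
    """
    hyp = Counter(hypothesis.lower().split())
    ref = Counter(reference.lower().split())
    # Multiset intersection counts tokens shared by both sides.
    overlap = sum((hyp & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Example: 5 of 6 tokens overlap on each side, so F1 = 5/6.
print(unigram_f1("the cat sat on the mat", "the cat is on the mat"))
```

Many of the papers below exist precisely because such surface-overlap scores correlate poorly with human judgments, motivating learned, LLM-based, and reference-free evaluators.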

Papers

Showing 1–50 of 71 papers

Title | Status | Hype
NLG Evaluation Metrics Beyond Correlation Analysis: An Empirical Metric Preference Checklist | Code | 3
Towards a Unified Multi-Dimensional Evaluator for Text Generation | Code | 2
Not All Errors are Equal: Learning Text Generation Metrics using Stratified Error Synthesis | Code | 1
Compression, Transduction, and Creation: A Unified Framework for Evaluating Natural Language Generation | Code | 1
Leveraging Large Language Models for NLG Evaluation: Advances and Challenges | Code | 1
Evaluating Evaluation Metrics: A Framework for Analyzing NLG Evaluation Metrics using Measurement Theory | Code | 1
G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment | Code | 1
LUNA: A Framework for Language Understanding and Naturalness Assessment | Code | 1
Themis: A Reference-free NLG Evaluation Language Model with Flexibility and Interpretability | Code | 1
Active Evaluation: Efficient NLG Evaluation with Few Pairwise Comparisons | Code | 1
Is ChatGPT a Good NLG Evaluator? A Preliminary Study | Code | 1
CoAScore: Chain-of-Aspects Prompting for NLG Evaluation | - | 0
Treat the system like a human student: Automatic naturalness evaluation of generated text without reference texts | - | 0
WaterJudge: Quality-Detection Trade-off when Watermarking Large Language Models | - | 0
A Dynamic, Interpreted CheckList for Meaning-oriented NLG Metric Evaluation – through the Lens of Semantic Similarity Rating | - | 0
X-Eval: Generalizable Multi-aspect Text Evaluation via Augmented Instruction Tuning with Auxiliary Evaluation Aspects | - | 0
A Snapshot of NLG Evaluation Practices 2005–2014 | - | 0
Deconstructing NLG Evaluation: Evaluation Practices, Assumptions, and Their Implications | - | 0
DeepSeek vs. o3-mini: How Well can Reasoning LLMs Evaluate MT and Summarization? | - | 0
A Survey of Evaluation Metrics Used for NLG Systems | - | 0
DHP Benchmark: Are LLMs Good NLG Evaluators? | - | 0
Dialect-robust Evaluation of Generated Text | - | 0
Dolphin: A Challenging and Diverse Benchmark for Arabic NLG | - | 0
NLG-Metricverse: An End-to-End Library for Evaluating Natural Language Generation | - | 0
Evaluation of Text Generation: A Survey | - | 0
Evaluation rules! On the use of grammars and rule-based systems for NLG evaluation | - | 0
Exploring the Multilingual NLG Evaluation Abilities of LLM-Based Evaluators | - | 0
The Authenticity Gap in Human Evaluation | - | 0
ImaginE: An Imagination-Based Automatic Evaluation Metric for Natural Language Generation | - | 0
Language Model Augmented Relevance Score | - | 0
Large Language Models Are Active Critics in NLG Evaluation | - | 0
LLM-based NLG Evaluation: Current Status and Challenges | - | 0
A Survey of Natural Language Generation | - | 0
MIPE: A Metric Independent Pipeline for Effective Code-Mixed NLG Evaluation | - | 0
A Tutorial on Evaluation Metrics used in Natural Language Generation | - | 0
Ev2R: Evaluating Evidence Retrieval in Automated Fact-Checking | - | 0
Agreement is overrated: A plea for correlation to assess human evaluation reliability | - | 0
Beyond One-Size-Fits-All: Inversion Learning for Highly Effective NLG Evaluation Prompts | - | 0
All That's 'Human' Is Not Gold: Evaluating Human Evaluation of Generated Text | - | 0
Repairing the Cracked Foundation: A Survey of Obstacles in Evaluation Practices for Generated Text | - | 0
Rethinking Model Evaluation as Narrowing the Socio-Technical Gap | - | 0
SAGEval: The frontiers of Satisfactory Agent based NLG Evaluation for reference-free open-ended text | - | 0
Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation | - | 0
The Pitfalls of Defining Hallucination | - | 0
The use of rating and Likert scales in Natural Language Generation human evaluation tasks: A review and some recommendations | - | 0
LLM Comparative Assessment: Zero-shot NLG Evaluation through Pairwise Comparisons using Large Language Models | Code | 0
Page 1 of 2

No leaderboard results yet.