SOTAVerified

NLG Evaluation

Evaluation of text generated by NLG (Natural Language Generation) systems, such as large language models.

Papers

Showing 1–50 of 71 papers

| Title | Status | Hype |
|---|---|---|
| NLG Evaluation Metrics Beyond Correlation Analysis: An Empirical Metric Preference Checklist | Code | 3 |
| Towards a Unified Multi-Dimensional Evaluator for Text Generation | Code | 2 |
| Evaluating Evaluation Metrics: A Framework for Analyzing NLG Evaluation Metrics using Measurement Theory | Code | 1 |
| Not All Errors are Equal: Learning Text Generation Metrics using Stratified Error Synthesis | Code | 1 |
| Compression, Transduction, and Creation: A Unified Framework for Evaluating Natural Language Generation | Code | 1 |
| LUNA: A Framework for Language Understanding and Naturalness Assessment | Code | 1 |
| Active Evaluation: Efficient NLG Evaluation with Few Pairwise Comparisons | Code | 1 |
| Leveraging Large Language Models for NLG Evaluation: Advances and Challenges | Code | 1 |
| Is ChatGPT a Good NLG Evaluator? A Preliminary Study | Code | 1 |
| Themis: A Reference-free NLG Evaluation Language Model with Flexibility and Interpretability | Code | 1 |
| G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment | Code | 1 |
| Bridging Cross-Lingual Gaps During Leveraging the Multilingual Sequence-to-Sequence Pretraining for Text Generation and Understanding | Code | 0 |
| Analyzing and Evaluating Correlation Measures in NLG Meta-Evaluation | Code | 0 |
| Are LLM-based Evaluators Confusing NLG Quality Criteria? | Code | 0 |
| A Study of Automatic Metrics for the Evaluation of Natural Language Explanations | Code | 0 |
| Better than Random: Reliable NLG Human Evaluation with Constrained Active Sampling | Code | 0 |
| EffEval: A Comprehensive Evaluation of Efficiency for MT Evaluation Metrics | Code | 0 |
| CLSE: Corpus of Linguistically Significant Entities | Code | 0 |
| DEBATE: Devil's Advocate-Based Assessment and Text Evaluation | Code | 0 |
| DecompEval: Evaluating Generated Texts as Unsupervised Decomposed Question Answering | Code | 0 |
| Defining and Detecting Vulnerability in Human Evaluation Guidelines: A Preliminary Study Towards Reliable NLG Evaluation | Code | 0 |
| Describe me an Aucklet: Generating Grounded Perceptual Category Descriptions | Code | 0 |
| Long-Form Information Alignment Evaluation Beyond Atomic Facts | Code | 0 |
| Near-Negative Distinction: Giving a Second Life to Human Evaluation Datasets | Code | 0 |
| Not All Metrics Are Guilty: Improving NLG Evaluation by Diversifying References | Code | 0 |
| One Prompt To Rule Them All: LLMs for Opinion Summary Evaluation | Code | 0 |
| OpeNLGauge: An Explainable Metric for NLG Evaluation with Open-Weights LLMs | Code | 0 |
| Perturbation CheckLists for Evaluating NLG Evaluation Metrics | Code | 0 |
| ReFeR: Improving Evaluation and Reasoning through Hierarchy of Models | Code | 0 |
| Towards Multiple References Era -- Addressing Data Leakage and Limited Reference Diversity in NLG Evaluation | Code | 0 |
| Unveiling the Achilles' Heel of NLG Evaluators: A Unified Adversarial Framework Driven by Large Language Models | Code | 0 |
| Why We Need New Evaluation Metrics for NLG | Code | 0 |
| LLM Comparative Assessment: Zero-shot NLG Evaluation through Pairwise Comparisons using Large Language Models | Code | 0 |
| Evaluation rules! On the use of grammars and rule-based systems for NLG evaluation | — | 0 |
| Exploring the Multilingual NLG Evaluation Abilities of LLM-Based Evaluators | — | 0 |
| A Survey of Natural Language Generation | — | 0 |
| The Authenticity Gap in Human Evaluation | — | 0 |
| ImaginE: An Imagination-Based Automatic Evaluation Metric for Natural Language Generation | — | 0 |
| A Survey of Evaluation Metrics Used for NLG Systems | — | 0 |
| Language Model Augmented Relevance Score | — | 0 |
| Large Language Models Are Active Critics in NLG Evaluation | — | 0 |
| A Snapshot of NLG Evaluation Practices 2005–2014 | — | 0 |
| LLM-based NLG Evaluation: Current Status and Challenges | — | 0 |
| A Dynamic, Interpreted CheckList for Meaning-oriented NLG Metric Evaluation – through the Lens of Semantic Similarity Rating | — | 0 |
| All That's 'Human' Is Not Gold: Evaluating Human Evaluation of Generated Text | — | 0 |
| MIPE: A Metric Independent Pipeline for Effective Code-Mixed NLG Evaluation | — | 0 |
| The Pitfalls of Defining Hallucination | — | 0 |
| NLG-Metricverse: An End-to-End Library for Evaluating Natural Language Generation | — | 0 |
Page 1 of 2

No leaderboard results yet.