SOTAVerified

Dialogue Evaluation

Papers

Showing 1–50 of 97 papers

| Title | Status | Hype |
| --- | --- | --- |
| Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language Models | Code | 2 |
| PONE: A Novel Automatic Evaluation Metric for Open-Domain Generative Dialogue Systems | Code | 1 |
| Don't Forget Your ABC's: Evaluating the State-of-the-Art in Chat-Oriented Dialogue Systems | Code | 1 |
| Learning an Unreferenced Metric for Online Dialogue Evaluation | Code | 1 |
| DynaEval: Unifying Turn and Dialogue Level Evaluation | Code | 1 |
| Assessing Dialogue Systems with Distribution Distances | Code | 1 |
| InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning | Code | 1 |
| Towards Holistic and Automatic Evaluation of Open-Domain Dialogue Generation | Code | 1 |
| Improving Dialog Evaluation with a Multi-reference Adversarial Dataset and Large Scale Pretraining | Code | 1 |
| Automatic Evaluation and Moderation of Open-domain Dialogue Systems | Code | 1 |
| Findings of the The RuATD Shared Task 2022 on Artificial Text Detection in Russian | Code | 1 |
| FineD-Eval: Fine-grained Automatic Dialogue-Level Evaluation | Code | 1 |
| GRADE: Automatic Graph-Enhanced Coherence Metric for Evaluating Open-Domain Dialogue Systems | Code | 1 |
| RuNNE-2022 Shared Task: Recognizing Nested Named Entities | Code | 1 |
| GLM-Dialog: Noise-tolerant Pre-training for Knowledge-grounded Dialogue Generation | Code | 1 |
| USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation | Code | 1 |
| Conversations Are Not Flat: Modeling the Dynamic Information Flow across Dialogue Utterances | Code | 1 |
| A Comprehensive Assessment of Dialog Evaluation Metrics | Code | 1 |
| RUBER: An Unsupervised Method for Automatic Evaluation of Open-Domain Dialog Systems | Code | 1 |
| Unsupervised Evaluation of Interactive Dialog with DialoGPT | Code | 1 |
| Towards Quantifiable Dialogue Coherence Evaluation | Code | 1 |
| DEnsity: Open-domain Dialogue Evaluation Metric using Density Estimation | Code | 1 |
| Q^2: Evaluating Factual Consistency in Knowledge-Grounded Dialogues via Question Generation and Question Answering | Code | 1 |
| DialogBench: Evaluating LLMs as Human-like Dialogue Systems | Code | 1 |
| xDial-Eval: A Multilingual Open-Domain Dialogue Evaluation Benchmark | Code | 0 |
| Achieving Reliable Human Assessment of Open-Domain Dialogue Systems | Code | 0 |
| A Comprehensive Analysis of the Effectiveness of Large Language Models as Automatic Dialogue Evaluators | Code | 0 |
| Adversarial Learning for Neural Dialogue Generation | Code | 0 |
| A Human-machine Collaborative Framework for Evaluating Malevolence in Dialogues | Code | 0 |
| An Adversarially-Learned Turing Test for Dialog Generation Models | Code | 0 |
| Approximating Interactive Human Evaluation with Self-Play for Open-Domain Dialog Systems | Code | 0 |
| BoK: Introducing Bag-of-Keywords Loss for Interpretable Dialogue Response Generation | Code | 0 |
| C-PMI: Conditional Pointwise Mutual Information for Turn-level Dialogue Evaluation | Code | 0 |
| DEAM: Dialogue Coherence Evaluation using AMR-based Semantic Manipulations | Code | 0 |
| Deconstruct to Reconstruct a Configurable Evaluation Metric for Open-Domain Dialogue Systems | Code | 0 |
| ECoh: Turn-level Coherence Evaluation for Multilingual Dialogues | Code | 0 |
| Evaluating Coherence in Dialogue Systems using Entailment | Code | 0 |
| Exploring the Impact of Human Evaluator Group on Chat-Oriented Dialogue Evaluation | Code | 0 |
| GCDF1: A Goal- and Context-Driven F-Score for Evaluating User Models | Code | 0 |
| Generating Negative Samples by Manipulating Golden Responses for Unsupervised Learning of a Response Evaluation Model | Code | 0 |
| Improving Automated Evaluation of Open Domain Dialog via Diverse Reference Augmentation | Code | 0 |
| Investigating Evaluation of Open-Domain Dialogue Systems With Human Generated Multiple References | Code | 0 |
| MDD-Eval: Self-Training on Augmented Data for Multi-Domain Dialogue Evaluation | Code | 0 |
| Measuring the Robustness of Reference-Free Dialogue Evaluation Systems | Code | 0 |
| MEDAL: A Framework for Benchmarking LLMs as Multilingual Open-Domain Chatbots and Dialogue Evaluators | Code | 0 |
| Methods for Recognizing Nested Terms | Code | 0 |
| PairEval: Open-domain Dialogue Evaluation with Pairwise Comparison | Code | 0 |
| Predictive Engagement: An Efficient Metric For Automatic Evaluation of Open-Domain Dialogue Systems | Code | 0 |
| Proxy Indicators for the Quality of Open-domain Dialogues | Code | 0 |
| RuOpinionNE-2024: Extraction of Opinion Tuples from Russian News Texts | Code | 0 |
Page 1 of 2

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MDD-Eval | Spearman Correlation | 0.51 | | Unverified |
| 2 | Lin-Reg (all) | Spearman Correlation | 0.49 | | Unverified |
| 3 | USR | Spearman Correlation | 0.42 | | Unverified |
| 4 | USR - DR (x = c) | Spearman Correlation | 0.32 | | Unverified |
| 5 | USR - MLM | Spearman Correlation | 0.31 | | Unverified |
| 6 | USR - DR (x = f) | Spearman Correlation | 0.14 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Lin-Reg (all) | Spearman Correlation | 0.54 | | Unverified |
| 2 | USR - DR (x = c) | Spearman Correlation | 0.48 | | Unverified |
| 3 | USR | Spearman Correlation | 0.47 | | Unverified |
| 4 | USR - MLM | Spearman Correlation | 0.08 | | Unverified |
| 5 | USR - DR (x = f) | Spearman Correlation | -0.05 | | Unverified |
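The benchmark tables above rank evaluation metrics by Spearman correlation with human judgments. As a minimal sketch of how such a number is computed (illustrative data only, not drawn from the tables), Spearman correlation is the Pearson correlation of rank-transformed scores, with tied values assigned their average rank:

```python
from statistics import mean

def avg_ranks(values):
    """Rank values 1..n (ascending), averaging ranks across ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman correlation: Pearson correlation of the two rank vectors."""
    ra, rb = avg_ranks(a), avg_ranks(b)
    ma, mb = mean(ra), mean(rb)
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    var_a = sum((x - ma) ** 2 for x in ra)
    var_b = sum((y - mb) ** 2 for y in rb)
    return cov / (var_a * var_b) ** 0.5

# Hypothetical example: metric scores vs. human ratings for 5 responses
metric_scores = [0.2, 0.9, 0.4, 0.7, 0.1]
human_ratings = [2, 5, 3, 4, 1]
print(spearman(metric_scores, human_ratings))  # → 1.0 (identical rankings)
```

In practice, leaderboards like this typically use a library routine (e.g. `scipy.stats.spearmanr`) rather than a hand-rolled version; the sketch only makes the rank-then-correlate computation explicit.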