SOTAVerified

Dialogue Evaluation

Papers

Showing 51–75 of 97 papers

| Title | Status | Hype |
| --- | --- | --- |
| CodingTeachLLM: Empowering LLM's Coding Ability via AST Prior Knowledge |  | 0 |
| Explaining Dialogue Evaluation Metrics using Adversarial Behavioral Analysis |  | 0 |
| Treating Dialogue Quality Evaluation as an Anomaly Detection Problem |  | 0 |
| U-NEED: A Fine-grained Dataset for User Needs-Centric E-commerce Conversational Recommendation |  | 0 |
| User Response and Sentiment Prediction for Automatic Dialogue Evaluation |  | 0 |
| WeChat AI & ICT's Submission for DSTC9 Interactive Dialogue Evaluation Track |  | 0 |
| FlowEval: A Consensus-Based Dialogue Evaluation Framework Using Segment Act Flows |  | 0 |
| Better Automatic Evaluation of Open-Domain Dialogue Systems with Contextualized Embeddings |  | 0 |
| How to Choose How to Choose Your Chatbot: A Massively Multi-System MultiReference Data Set for Dialog Metric Evaluation |  | 0 |
| How to Evaluate the Next System: Automatic Dialogue Evaluation from the Perspective of Continual Learning |  | 0 |
| xDial-Eval: A Multilingual Open-Domain Dialogue Evaluation Benchmark | Code | 0 |
| Achieving Reliable Human Assessment of Open-Domain Dialogue Systems | Code | 0 |
| A Comprehensive Analysis of the Effectiveness of Large Language Models as Automatic Dialogue Evaluators | Code | 0 |
| Adversarial Learning for Neural Dialogue Generation | Code | 0 |
| A Human-machine Collaborative Framework for Evaluating Malevolence in Dialogues | Code | 0 |
| An Adversarially-Learned Turing Test for Dialog Generation Models | Code | 0 |
| Approximating Interactive Human Evaluation with Self-Play for Open-Domain Dialog Systems | Code | 0 |
| BoK: Introducing Bag-of-Keywords Loss for Interpretable Dialogue Response Generation | Code | 0 |
| C-PMI: Conditional Pointwise Mutual Information for Turn-level Dialogue Evaluation | Code | 0 |
| DEAM: Dialogue Coherence Evaluation using AMR-based Semantic Manipulations | Code | 0 |
| Deconstruct to Reconstruct a Configurable Evaluation Metric for Open-Domain Dialogue Systems | Code | 0 |
| ECoh: Turn-level Coherence Evaluation for Multilingual Dialogues | Code | 0 |
| Evaluating Coherence in Dialogue Systems using Entailment | Code | 0 |
| Exploring the Impact of Human Evaluator Group on Chat-Oriented Dialogue Evaluation | Code | 0 |
| GCDF1: A Goal- and Context- Driven F-Score for Evaluating User Models | Code | 0 |
Page 3 of 4

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MDD-Eval | Spearman Correlation | 0.51 |  | Unverified |
| 2 | Lin-Reg (all) | Spearman Correlation | 0.49 |  | Unverified |
| 3 | USR | Spearman Correlation | 0.42 |  | Unverified |
| 4 | USR - DR (x = c) | Spearman Correlation | 0.32 |  | Unverified |
| 5 | USR - MLM | Spearman Correlation | 0.31 |  | Unverified |
| 6 | USR - DR (x = f) | Spearman Correlation | 0.14 |  | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Lin-Reg (all) | Spearman Correlation | 0.54 |  | Unverified |
| 2 | USR - DR (x = c) | Spearman Correlation | 0.48 |  | Unverified |
| 3 | USR | Spearman Correlation | 0.47 |  | Unverified |
| 4 | USR - MLM | Spearman Correlation | 0.08 |  | Unverified |
| 5 | USR - DR (x = f) | Spearman Correlation | -0.05 |  | Unverified |
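The "Spearman Correlation" reported above is typically computed between a metric's scores and human quality ratings for the same dialogue responses: both lists are converted to ranks, and the Pearson correlation of the ranks is taken. As a minimal sketch of that computation — the scores and ratings below are made-up illustrative values, not data from these benchmarks:

```python
def rankdata(xs):
    """Assign 1-based ranks, averaging ranks over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        # Find the run of tied values starting at position i.
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical example: metric scores vs. human ratings for five responses.
metric_scores = [0.2, 0.5, 0.3, 0.9, 0.7]
human_ratings = [1, 2, 3, 5, 4]
print(round(spearman(metric_scores, human_ratings), 2))  # prints 0.9
```

A value near 1 means the metric ranks responses almost exactly as humans do; values near 0 (such as USR - MLM's 0.08 in the second table) indicate the metric's ranking is close to uncorrelated with human judgment.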