SOTAVerified

Dialogue Evaluation

Papers

Showing 31–40 of 97 papers

| Title | Status | Hype |
|---|---|---|
| CodingTeachLLM: Empowering LLM's Coding Ability via AST Prior Knowledge | | 0 |
| Dialogue Evaluation with Offline Reinforcement Learning | | 0 |
| ACUTE-EVAL: Improved Dialogue Evaluation with Optimized Questions and Multi-turn Comparisons | | 0 |
| MARS-Bench: A Multi-turn Athletic Real-world Scenario Benchmark for Dialogue Evaluation | | 0 |
| Learning the Human Judgment for the Automatic Evaluation of Chatbot | | 0 |
| DCH-2: A Parallel Customer-Helpdesk Dialogue Corpus with Distributions of Annotators' Labels | | 0 |
| Joint Goal Segmentation and Goal Success Prediction on Multi-Domain Conversations | | 0 |
| LeCoDe: A Benchmark Dataset for Interactive Legal Consultation Dialogue Evaluation | | 0 |
| Human Evaluation of Conversations is an Open Problem: comparing the sensitivity of various methods for evaluating dialogue agents | | 0 |
| Improving Open-Domain Dialogue Evaluation with a Causal Inference Model | | 0 |
Page 4 of 10

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MDD-Eval | Spearman Correlation | 0.51 | | Unverified |
| 2 | Lin-Reg (all) | Spearman Correlation | 0.49 | | Unverified |
| 3 | USR | Spearman Correlation | 0.42 | | Unverified |
| 4 | USR - DR (x = c) | Spearman Correlation | 0.32 | | Unverified |
| 5 | USR - MLM | Spearman Correlation | 0.31 | | Unverified |
| 6 | USR - DR (x = f) | Spearman Correlation | 0.14 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Lin-Reg (all) | Spearman Correlation | 0.54 | | Unverified |
| 2 | USR - DR (x = c) | Spearman Correlation | 0.48 | | Unverified |
| 3 | USR | Spearman Correlation | 0.47 | | Unverified |
| 4 | USR - MLM | Spearman Correlation | 0.08 | | Unverified |
| 5 | USR - DR (x = f) | Spearman Correlation | -0.05 | | Unverified |
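The tables above rank models by Spearman correlation between model-assigned scores and human judgments. For reference, a minimal pure-Python sketch of the metric (rank both score lists, handling ties by average rank, then take the Pearson correlation of the ranks); the function names here are illustrative, not part of the leaderboard:

```python
def average_ranks(xs):
    """1-based ranks of xs, with tied values sharing their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        # find the run of tied values starting at sorted position i
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

In practice `scipy.stats.spearmanr` computes the same quantity (plus a p-value); the hand-rolled version is only meant to make the ranking-then-correlating step explicit.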