SOTAVerified

Dialogue Evaluation

Papers

Showing 21-30 of 97 papers

Title | Status | Hype
USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation | Code | 1
Learning an Unreferenced Metric for Online Dialogue Evaluation | Code | 1
PONE: A Novel Automatic Evaluation Metric for Open-Domain Generative Dialogue Systems | Code | 1
RUBER: An Unsupervised Method for Automatic Evaluation of Open-Domain Dialog Systems | Code | 1
DRE: An Effective Dual-Refined Method for Integrating Small and Large Language Models in Open-Domain Dialogue Evaluation | | 0
MEDAL: A Framework for Benchmarking LLMs as Multilingual Open-Domain Chatbots and Dialogue Evaluators | Code | 0
MARS-Bench: A Multi-turn Athletic Real-world Scenario Benchmark for Dialogue Evaluation | | 0
LeCoDe: A Benchmark Dataset for Interactive Legal Consultation Dialogue Evaluation | | 0
Methods for Recognizing Nested Terms | Code | 0
RuOpinionNE-2024: Extraction of Opinion Tuples from Russian News Texts | Code | 0
Page 3 of 10

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MDD-Eval | Spearman Correlation | 0.51 | | Unverified
2 | Lin-Reg (all) | Spearman Correlation | 0.49 | | Unverified
3 | USR | Spearman Correlation | 0.42 | | Unverified
4 | USR - DR (x = c) | Spearman Correlation | 0.32 | | Unverified
5 | USR - MLM | Spearman Correlation | 0.31 | | Unverified
6 | USR - DR (x = f) | Spearman Correlation | 0.14 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Lin-Reg (all) | Spearman Correlation | 0.54 | | Unverified
2 | USR - DR (x = c) | Spearman Correlation | 0.48 | | Unverified
3 | USR | Spearman Correlation | 0.47 | | Unverified
4 | USR - MLM | Spearman Correlation | 0.08 | | Unverified
5 | USR - DR (x = f) | Spearman Correlation | -0.05 | | Unverified
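
For context on the Metric column: each entry reports the Spearman rank correlation between a metric's automatic scores and human quality judgments over the same set of dialogue responses. Below is a minimal sketch of that computation using scipy; all scores in it are invented placeholders, not data from the tables above.

```python
# Minimal sketch: Spearman correlation between an automatic dialogue
# metric's scores and human ratings. All values here are made-up
# placeholders, not data from the benchmark tables above.
from scipy.stats import spearmanr

# Hypothetical per-response scores from an automatic metric (e.g., USR)
metric_scores = [0.91, 0.40, 0.73, 0.15, 0.88]

# Hypothetical human quality ratings for the same five responses
human_ratings = [4.5, 2.0, 3.5, 1.0, 5.0]

# spearmanr ranks both lists and correlates the ranks; a value near 1.0
# means the metric orders responses the same way humans do.
rho, p_value = spearmanr(metric_scores, human_ratings)
print(f"Spearman correlation: {rho:.2f} (p = {p_value:.3f})")
```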