SOTAVerified

Dialogue Evaluation

Papers

Showing 31–40 of 97 papers

| Title | Status | Hype |
| --- | --- | --- |
| Investigating Evaluation of Open-Domain Dialogue Systems With Human Generated Multiple References | Code | 0 |
| PairEval: Open-domain Dialogue Evaluation with Pairwise Comparison | Code | 0 |
| Deconstruct to Reconstruct a Configurable Evaluation Metric for Open-Domain Dialogue Systems | Code | 0 |
| Approximating Interactive Human Evaluation with Self-Play for Open-Domain Dialog Systems | Code | 0 |
| DEAM: Dialogue Coherence Evaluation using AMR-based Semantic Manipulations | Code | 0 |
| An Adversarially-Learned Turing Test for Dialog Generation Models | Code | 0 |
| Towards Best Experiment Design for Evaluating Dialogue System Output | Code | 0 |
| Generating Negative Samples by Manipulating Golden Responses for Unsupervised Learning of a Response Evaluation Model | Code | 0 |
| GCDF1: A Goal- and Context-Driven F-Score for Evaluating User Models | Code | 0 |
| C-PMI: Conditional Pointwise Mutual Information for Turn-level Dialogue Evaluation | Code | 0 |
Page 4 of 10

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MDD-Eval | Spearman Correlation | 0.51 | — | Unverified |
| 2 | Lin-Reg (all) | Spearman Correlation | 0.49 | — | Unverified |
| 3 | USR | Spearman Correlation | 0.42 | — | Unverified |
| 4 | USR - DR (x = c) | Spearman Correlation | 0.32 | — | Unverified |
| 5 | USR - MLM | Spearman Correlation | 0.31 | — | Unverified |
| 6 | USR - DR (x = f) | Spearman Correlation | 0.14 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Lin-Reg (all) | Spearman Correlation | 0.54 | — | Unverified |
| 2 | USR - DR (x = c) | Spearman Correlation | 0.48 | — | Unverified |
| 3 | USR | Spearman Correlation | 0.47 | — | Unverified |
| 4 | USR - MLM | Spearman Correlation | 0.08 | — | Unverified |
| 5 | USR - DR (x = f) | Spearman Correlation | -0.05 | — | Unverified |