| Towards Best Experiment Design for Evaluating Dialogue System Output | Sep 23, 2019 | Dialogue Evaluation | Code Available | 0 | 5 |
| MME-CRS: Multi-Metric Evaluation Based on Correlation Re-Scaling for Evaluating Open-Domain Dialogue | Jun 19, 2022 | Dialogue Evaluation, MME | Unverified | 0 | 0 |
| One "Ruler" for All Languages: Multi-Lingual Dialogue Evaluation with Adversarial Multi-Task Learning | May 8, 2018 | Dialogue Evaluation | Unverified | 0 | 0 |
| On the Benchmarking of LLMs for Open-Domain Dialogue Evaluation | Jul 4, 2024 | Benchmarking, Chatbot | Unverified | 0 | 0 |
| U-NEED: A Fine-grained Dataset for User Needs-Centric E-commerce Conversational Recommendation | May 5, 2023 | Conversational Recommendation, Dialogue Evaluation | Unverified | 0 | 0 |
| PoE: a Panel of Experts for Generalized Automatic Dialogue Assessment | Dec 18, 2022 | Data Augmentation, Dialogue Evaluation | Unverified | 0 | 0 |
| Dialogue You Can Trust: Human and AI Perspectives on Generated Conversations | Sep 3, 2024 | Dialogue Evaluation | Unverified | 0 | 0 |
| Pragmatically Appropriate Diversity for Dialogue Evaluation | Apr 6, 2023 | Dialogue Evaluation, Diversity | Unverified | 0 | 0 |
| Predicting Ratings of Real Dialogue Participants from Artificial Data and Ratings of Human Dialogue Observers | May 1, 2020 | Dialogue Evaluation | Unverified | 0 | 0 |
| ACUTE-EVAL: Improved Dialogue Evaluation with Optimized Questions and Multi-turn Comparisons | Sep 6, 2019 | Dialogue Evaluation | Unverified | 0 | 0 |