PingPong: A Benchmark for Role-Playing Language Models with User Emulation and Multi-Model Evaluation
Ilya Gusev
Code: https://github.com/ilyagusev/ping_pong_bench
Abstract
We introduce a benchmark for evaluating the role-playing capabilities of language models. Our approach leverages language models themselves to emulate users in dynamic, multi-turn conversations and to assess the resulting dialogues. The framework consists of three main components: a player model that assumes a specific character role, an interrogator model that simulates user behavior, and several judge models that evaluate conversation quality. We conducted experiments comparing automated evaluations with human annotations to validate our approach, demonstrating strong correlations across multiple criteria. This work provides a foundation for robust and dynamic evaluation of model capabilities in interactive scenarios.
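The abstract's three-component setup can be pictured as a simple loop: the interrogator emulates a user turn, the player replies in character, and judge models score the finished dialogue. The sketch below is only an illustration of that structure, not the benchmark's actual API; the `ChatFn` signature, function names, and scoring scheme are assumptions for clarity.

```python
# Illustrative sketch (hypothetical names, not the repository's API):
# player = role-playing model, interrogator = user emulator, judges = evaluators.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Assumed interface: takes a system prompt and a message history, returns a reply.
# Any chat-completion backend could be wrapped to match this signature.
ChatFn = Callable[[str, List[Dict[str, str]]], str]

@dataclass
class Dialogue:
    turns: List[Dict[str, str]] = field(default_factory=list)

def run_dialogue(player: ChatFn, interrogator: ChatFn,
                 character_card: str, user_persona: str,
                 num_turns: int = 4) -> Dialogue:
    """Multi-turn conversation: the interrogator plays the user, the player stays in character."""
    dialogue = Dialogue()
    for _ in range(num_turns):
        # Interrogator produces the next user message given the history so far.
        user_msg = interrogator(user_persona, dialogue.turns)
        dialogue.turns.append({"role": "user", "content": user_msg})
        # Player answers in character, conditioned on the character card.
        char_msg = player(character_card, dialogue.turns)
        dialogue.turns.append({"role": "assistant", "content": char_msg})
    return dialogue

def judge_dialogue(judges: List[ChatFn], rubric: str, dialogue: Dialogue) -> float:
    """Average score over several judge models rating the same transcript."""
    transcript = "\n".join(f"{t['role']}: {t['content']}" for t in dialogue.turns)
    scores = []
    for judge in judges:
        reply = judge(rubric, [{"role": "user", "content": transcript}])
        scores.append(float(reply.strip()))  # assumes the judge returns a bare numeric score
    return sum(scores) / len(scores)
```

Using several judges and averaging their scores, as sketched here, is one straightforward way to realize the multi-model evaluation the abstract describes; the paper's actual aggregation and rubric design are specified in the full text.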