ZeroSumEval: An Extensible Framework For Scaling LLM Evaluation with Inter-Model Competition
Hisham A. Alyahya, Haidar Khan, Yazeed Alnumay, M Saiful Bari, Bülent Yener
Code
- github.com/zerosumeval/zerosumeval (official)
- github.com/facebookresearch/zerosumeval
Abstract
We introduce ZeroSumEval, a dynamic, competition-based, and evolving evaluation framework for Large Language Models (LLMs) built on competitive games. ZeroSumEval comprises a diverse suite of games, including security challenges (Capture the Flag), classic board games (chess), and knowledge tests (MathQuiz), designed to evaluate a range of capabilities such as strategic reasoning, planning, knowledge application, safety, and adaptability. Building on recent studies that highlight the effectiveness of game-based evaluations for LLMs, ZeroSumEval enhances these approaches by providing a standardized, extensible framework for easily implementing games and by leveraging DSPy to provide a better abstraction for LLM player strategies.
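To make the idea of a standardized, extensible game framework concrete, the sketch below shows one plausible shape such an abstraction could take. It is a hypothetical illustration, not the actual ZeroSumEval API: the `Game`, `Player`, and `play_match` names are invented here, and the toy "race to ten" game stands in for real games like chess or Capture the Flag. In the real framework, a `Player` would wrap an LLM (via DSPy) rather than a hand-coded policy.

```python
# Hypothetical sketch (NOT the actual ZeroSumEval API): a minimal
# game/player abstraction of the kind the abstract describes, where
# new games plug in by subclassing Game and new strategies by
# subclassing Player.
from abc import ABC, abstractmethod

class Player(ABC):
    @abstractmethod
    def make_move(self, state):
        """Choose a move given the current game state."""

class Game(ABC):
    @abstractmethod
    def initial_state(self): ...
    @abstractmethod
    def is_over(self, state): ...
    @abstractmethod
    def apply(self, state, move): ...
    @abstractmethod
    def winner(self, state): ...

def play_match(game, players):
    """Run one zero-sum match, alternating turns until the game ends."""
    state = game.initial_state()
    turn = 0
    while not game.is_over(state):
        move = players[turn % len(players)].make_move(state)
        state = game.apply(state, move)
        turn += 1
    return game.winner(state)

# Toy game: players alternately add 1 or 2; whoever reaches 10 wins.
class RaceToTen(Game):
    def initial_state(self):
        return {"total": 0, "moves": 0}
    def is_over(self, state):
        return state["total"] >= 10
    def apply(self, state, move):
        return {"total": state["total"] + move,
                "moves": state["moves"] + 1}
    def winner(self, state):
        # The player who made the last move reached 10 and wins.
        return (state["moves"] - 1) % 2

class GreedyPlayer(Player):
    def make_move(self, state):
        # Optimal policy: move onto totals congruent to 1 mod 3
        # (1, 4, 7, 10); otherwise play the minimal move.
        return 2 if state["total"] % 3 == 2 else 1
```

With two `GreedyPlayer`s, the first mover wins `RaceToTen` every time; swapping in an LLM-backed player against a scripted baseline is the kind of head-to-head comparison the framework's competitive evaluation is built around.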