Emergent LLM behaviors are observationally equivalent to data leakage

2025-05-26

Christopher Barrie, Petter Törnberg

Abstract

Ashery et al. recently argue that large language models (LLMs), when paired to play a classic "naming game," spontaneously develop linguistic conventions reminiscent of human social norms. Here, we show that their results are better explained by data leakage: the models simply reproduce conventions they already encountered during pre-training. Despite the authors' mitigation measures, we provide multiple analyses demonstrating that the LLMs recognize the structure of the coordination game and recall its outcomes, rather than developing "emergent" conventions. Consequently, the observed behaviors are indistinguishable from memorization of the training corpus. We conclude by pointing to potential alternative strategies and by reflecting more generally on the place of LLMs in social science modeling.
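For readers unfamiliar with the setup, the sketch below simulates a classic naming game of the kind Ashery et al. pair LLMs to play: agents are repeatedly paired at random, each picks a name, and coordination succeeds when the names match. The population size, name pool, memory length, and the frequency heuristic in `choose` are illustrative assumptions, not the paper's exact parameters; in the LLM version, `choose` would be a prompted model call, which is precisely where pre-training knowledge can leak in.

```python
import random
from collections import Counter, deque

# Illustrative parameters -- not the exact values used by Ashery et al.
NAMES = ["A", "B", "C"]   # shared pool of candidate names
N_AGENTS = 20
MEMORY = 5                # how many past rounds each agent remembers
ROUNDS = 2000

class Agent:
    def __init__(self):
        # Each memory entry: (own_choice, partner_choice, success)
        self.memory = deque(maxlen=MEMORY)

    def choose(self):
        # Simple stand-in for the LLM's prompted decision: prefer the name
        # that most often succeeded in recent memory, else pick at random.
        wins = Counter(own for own, _, ok in self.memory if ok)
        if wins:
            return wins.most_common(1)[0][0]
        return random.choice(NAMES)

agents = [Agent() for _ in range(N_AGENTS)]
for _ in range(ROUNDS):
    a, b = random.sample(agents, 2)   # random pairing each round
    ca, cb = a.choose(), b.choose()
    ok = ca == cb                     # coordination succeeds on a match
    a.memory.append((ca, cb, ok))
    b.memory.append((cb, ca, ok))

# A population-wide convention shows up as one name dominating final choices.
print(Counter(agent.choose() for agent in agents))
```

The critique in this paper can be stated in terms of this sketch: when a pretrained LLM is substituted for `choose`, the model may already recognize the game and recall which conventions such populations converge to, so convergence alone cannot distinguish genuinely emergent coordination from memorization.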
