When Agents Disagree With Themselves: Measuring Behavioral Consistency in LLM-Based Agents

2026-02-12 · Code Available

Aman Mehta


Abstract

Run the same LLM agent on the same task twice: do you get the same behavior? We find the answer is often no. In a study of 3,000 agent runs across three models (Llama 3.1 70B, GPT-4o, and Claude Sonnet 4.5) on HotpotQA, we observe that ReAct-style agents produce 2.0--4.2 distinct action sequences per 10 runs on average, even with identical inputs. More importantly, this variance predicts failure: tasks with consistent behavior (2 unique paths) achieve 80--92% accuracy, while highly inconsistent tasks (6 unique paths) achieve only 25--60%, a 32--55 percentage point gap depending on model. We trace variance to early decisions: 69% of divergence occurs at step 2, the first search query. Our results suggest that monitoring behavioral consistency during execution could enable early error detection and improve agent reliability.
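The consistency measure described above — counting distinct action sequences across repeated runs and locating the first step at which runs diverge — can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the run representation (each run as a list of action strings) and both function names are assumptions.

```python
def unique_action_paths(runs):
    """Count distinct action sequences among repeated runs of one task.

    `runs` is a list of runs, each a list of action strings; two runs
    follow the same path only if every step matches exactly.
    """
    return len({tuple(run) for run in runs})


def first_divergence_step(runs):
    """Return the earliest step index at which any two runs differ,
    or None if all runs are identical (including in length)."""
    if len({tuple(run) for run in runs}) <= 1:
        return None
    for i in range(max(len(r) for r in runs)):
        # Runs that have already ended contribute a None "step".
        steps = {r[i] if i < len(r) else None for r in runs}
        if len(steps) > 1:
            return i
    return None


# Three runs of one task: two agree, one issues a different search query
# at step 1 (0-indexed), mirroring the paper's early-divergence finding.
runs = [
    ["think", "search[query A]", "finish"],
    ["think", "search[query B]", "finish"],
    ["think", "search[query A]", "finish"],
]
print(unique_action_paths(runs))    # 2 distinct paths
print(first_divergence_step(runs))  # diverges at step 1
```

A monitor built on this idea would sample several runs (or branches) per task and flag tasks whose path count exceeds a threshold as likely failures, consistent with the 32--55 point accuracy gap the abstract reports.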

Reproductions