SOTAVerified

LLMs are Capable of Misaligned Behavior Under Explicit Prohibition and Surveillance

2025-06-30Code Available0· sign in to hype

Igor Ivanov

Code Available — Be the first to reproduce this paper.

Reproduce

Code

Abstract

In this paper, LLMs are tasked with completing an impossible quiz, while they are in a sandbox, monitored, told about these measures and instructed not to cheat. Some frontier LLMs cheat consistently and attempt to circumvent restrictions despite everything. The results reveal a fundamental tension between goal-directed behavior and alignment in current LLMs. The code and evaluation logs are available at github.com/baceolus/cheating_evals

Reproductions