
Reasoning Under Pressure: How do Training Incentives Influence Chain-of-Thought Monitorability?

2025-12-08

Matt MacDermott, Qiyao Wei, Rada Djoneva, Francis Rhys Ward


Abstract

AI systems that output their reasoning in natural language offer an opportunity for safety -- we can monitor their chain of thought (CoT) for undesirable reasoning, such as the pursuit of harmful objectives. However, the extent to which CoT faithfully reflects the underlying reasoning process, and hence the extent to which it can be usefully monitored, may be influenced by certain aspects of training. We investigate how different training incentives, applied to a reasoning model, affect its monitorability. We introduce a novel methodology for measuring monitorability according to whether a monitor can predict a key latent variable using the model's reasoning. When controlling for accuracy, we do not find evidence for consistent effects from commonly used incentives (length penalties and KL regularisation), but we find that adversarial optimisation (penalising monitor accuracy) degrades monitor performance, while direct optimisation for monitorability does not reliably lead to improvements. Our code is available at https://github.com/QiyaoWei/reasoning-under-pressure.
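The measurement idea in the abstract can be illustrated with a toy sketch: a monitor attempts to predict a key latent variable from the model's chain of thought, and the monitor's accuracy serves as the monitorability score. Everything below is an illustrative assumption, not the paper's actual monitor or data; see the linked repository for the real implementation.

```python
# Hypothetical sketch of measuring CoT monitorability: a monitor predicts a
# latent variable from the chain of thought; its accuracy is the score.
# The keyword-based monitor and the examples are illustrative assumptions.

def keyword_monitor(cot: str) -> bool:
    """Toy monitor: predicts the latent variable is True if the CoT
    mentions the (assumed) relevant concept."""
    return "prime" in cot.lower()

def monitorability(examples: list[tuple[str, bool]]) -> float:
    """Fraction of examples where the monitor's prediction from the CoT
    matches the true latent variable."""
    correct = sum(keyword_monitor(cot) == latent for cot, latent in examples)
    return correct / len(examples)

examples = [
    ("7 is prime, so the answer is yes.", True),
    ("The number factors as 3 * 5.", False),
    ("Checking divisibility... it is prime.", True),
    ("No reasoning shown.", True),  # unfaithful CoT: latent not recoverable
]
score = monitorability(examples)  # 3 of 4 predictions match
```

Under this framing, training pressures that make the CoT less informative about the latent variable (e.g. adversarial optimisation against the monitor) show up directly as a drop in this accuracy.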
