SOTAVerified

Fine-Tuning Language Models to Know What They Know

2026-02-02 · Code Available

Sangjun Park, Elliot Meyerson, Xin Qiu, Risto Miikkulainen

Abstract

Metacognition, the awareness of one's own knowledge, is a critical component of intelligence. While humans rely on a shared internal memory both to answer questions and to report their knowledge state, whether LLMs exhibit a similar dependency remains underexplored. This study proposes a framework that measures metacognitive ability, d'_type2, with a dual-prompt method, and introduces Evolution Strategy for Metacognitive Alignment (ESMA) to bind a model's internal knowledge to its explicit behaviors. ESMA generalizes robustly across diverse untrained settings, indicating an enhancement in the model's ability to reference its own knowledge. Furthermore, parameter analysis attributes these improvements to a sparse set of significant modifications.
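The d'_type2 measure the abstract refers to comes from signal-detection theory: type-2 sensitivity quantifies how well a subject's self-reported knowledge tracks its actual correctness. As a minimal illustrative sketch (not the paper's implementation; the function name and the two boolean arrays are assumptions), one could score a dual-prompt evaluation like this, where one prompt elicits answers and a second prompt elicits "do you know this?" self-reports:

```python
from statistics import NormalDist

def type2_dprime(correct, claims_know, eps=0.5):
    """Type-2 sensitivity d' under a signal-detection framing.
    correct[i]     -- whether the model's answer to item i was right
    claims_know[i] -- whether the model reported knowing item i
    eps smooths the counts so z-scores stay finite at 0% or 100% rates.
    """
    hits = sum(c and k for c, k in zip(correct, claims_know))       # says "know", was right
    fas = sum((not c) and k for c, k in zip(correct, claims_know))  # says "know", was wrong
    n_right = sum(correct)
    n_wrong = len(correct) - n_right
    hit_rate = (hits + eps) / (n_right + 2 * eps)
    fa_rate = (fas + eps) / (n_wrong + 2 * eps)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)  # > 0: self-reports track correctness

# Toy run: self-reports mostly (but not perfectly) track correctness.
correct = [True, True, True, False, False, False]
claims_know = [True, True, False, False, False, True]
print(type2_dprime(correct, claims_know))
```

A perfectly calibrated model would claim knowledge exactly when it is correct, driving d'_type2 high; a model whose self-reports are unrelated to its accuracy scores near zero, which is what an alignment method like ESMA would aim to improve.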
