SOTAVerified

Inducing Epistemological Humility in Large Language Models: A Targeted SFT Approach to Reducing Hallucination

2026-03-18

Cem Uluoglakci, Tugba Taskaya Temizel


Abstract

Large language models (LLMs) often hallucinate, producing fluent but false information, partly because supervised fine-tuning (SFT) implicitly rewards always responding. We introduce HypoTermInstruct, an SFT dataset (31,487 responses to 11,151 questions) designed to teach models epistemological humility: the ability to recognize the limits of their own knowledge and admit uncertainty. This is achieved through questions about non-existent "hypothetical" terms. We also release HypoTermQA-Enhanced, a benchmark for hallucination tendency, strengthened through multiple rounds of validation. We conducted 800 controlled LoRA SFT runs across Llama3.1-8B and Gemma3-4B (base and instruct variants), testing 100 fine-tuning configurations with paired controls. Our results show that replacing generic instruction data with HypoTermInstruct significantly improves the HypoTerm Score (median increases of 0.19% to 25.91%) and FactScore (+0.39% to +0.86%) while keeping MMLU performance stable (decreases of only 0.26% to 0.35%). Our work demonstrates that targeted, high-quality SFT data teaching meta-cognitive skills can effectively reduce hallucination without preference-optimization or RL pipelines, providing mechanistic insights and a practical path toward more reliable AI systems.
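The core idea described in the abstract, pairing questions about non-existent terms with targets that admit ignorance, can be sketched as below. This is a minimal illustration only: the question template, answer wording, term list, and field names are hypothetical assumptions, not the actual HypoTermInstruct format.

```python
# Minimal sketch: build SFT examples that reward admitting uncertainty
# when asked about a non-existent ("hypothetical") term.
# NOTE: templates, terms, and dict keys are illustrative assumptions,
# not the released HypoTermInstruct schema.

def make_humility_example(term: str) -> dict:
    """Pair a question about a fabricated term with a refusal-style target."""
    question = f"What is {term}, and where is it used?"
    answer = (
        f"I am not aware of any established concept called '{term}'. "
        "It may be fictional, so I cannot answer this reliably."
    )
    return {"prompt": question, "response": answer}

# Hypothetical non-existent terms for demonstration.
fake_terms = ["quantum flux regression", "Herzfeld looping"]
sft_examples = [make_humility_example(t) for t in fake_terms]
```

Training on such pairs alongside ordinary instruction data counteracts the SFT bias toward always answering, since the model sees explicit supervision for declining to fabricate.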
