Quantifying Genuine Awareness in Hallucination Prediction Beyond Question-Side Shortcuts

2026-03-09

Yeongbin Seo, Dongha Lee, Jinyoung Yeo

Abstract

Many works have proposed methodologies for language model (LM) hallucination detection and reported seemingly strong performance. However, we argue that the performance reported to date reflects not only a model's genuine awareness of its internal information, but also awareness derived purely from question-side information (e.g., benchmark hacking). While benchmark hacking can effectively boost hallucination detection scores on existing benchmarks, it does not generalize to out-of-domain settings or practical usage. Nevertheless, disentangling how much of a model's hallucination detection performance arises from question-side awareness is non-trivial. To address this, we propose the Approximate Question-side Effect (AQE), a methodology for measuring this effect without requiring human labor. Our analysis using AQE reveals that existing hallucination detection methods rely heavily on benchmark hacking.
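The abstract does not spell out how AQE is computed, so the following is a hedged sketch of the general idea rather than the authors' method: estimate how much hallucination-detection performance is reachable from the question alone, with no access to the model's answer or internal states. The function name `question_only_auroc` and the TF-IDF + logistic-regression baseline are illustrative assumptions.

```python
# Hedged sketch, NOT the authors' AQE: score a detector that sees only the
# question text. If this baseline alone scores highly on a benchmark,
# question-side shortcuts likely account for much of any detector's score there.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline


def question_only_auroc(questions, hallucinated, cv=5):
    """AUROC of a classifier given only question text.

    questions: list[str] of benchmark questions.
    hallucinated: list[int], 1 if the LM hallucinated on that question.
    """
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    # Out-of-fold probabilities, so the baseline is not trivially overfit.
    probs = cross_val_predict(
        clf, questions, hallucinated, cv=cv, method="predict_proba"
    )[:, 1]
    return roc_auc_score(hallucinated, probs)
```

Under these assumptions, the gap between a full detector's AUROC and this question-only AUROC serves as one rough proxy for how much of the detector's performance reflects genuine internal awareness rather than question-side cues.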
