
Attention Head Embeddings with Trainable Deep Kernels for Hallucination Detection in LLMs

2025-06-11

Rodion Oblovatny, Alexandra Bazarova, Alexey Zaytsev


Abstract

We present a novel approach for detecting hallucinations in large language models (LLMs) by analyzing the probabilistic divergence between prompt and response hidden-state distributions. Counterintuitively, we find that hallucinated responses exhibit smaller deviations from their prompts compared to grounded responses, suggesting that hallucinations often arise from superficial rephrasing rather than substantive reasoning. Leveraging this insight, we propose a model-intrinsic detection method that uses distributional distances as principled hallucination scores, eliminating the need for external knowledge or auxiliary models. To enhance sensitivity, we employ deep learnable kernels that automatically adapt to capture nuanced geometric differences between distributions. Our approach outperforms existing baselines, demonstrating state-of-the-art performance on several benchmarks. The method remains competitive even without kernel training, offering a robust, scalable solution for hallucination detection.
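The abstract's core idea — a distributional distance between prompt and response hidden states used as a hallucination score — can be sketched as a maximum mean discrepancy (MMD). A minimal illustration is below, assuming hidden states arrive as NumPy arrays of shape (tokens, hidden_dim); the fixed RBF kernel is a stand-in for the paper's trainable deep kernel, and the function names are illustrative, not from the paper:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel between rows of X (n, d) and Y (m, d).
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def mmd2(X, Y, gamma=1.0):
    # Biased estimate of squared Maximum Mean Discrepancy between the
    # empirical distributions of the rows of X and Y.
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

def hallucination_score(prompt_states, response_states, gamma=1.0):
    # Per the abstract's finding, hallucinated responses stay CLOSER to
    # their prompts, so a smaller divergence is the suspicious signal;
    # negating the distance makes higher scores mean "more likely
    # hallucinated" (a presentation choice, not from the paper).
    return -mmd2(prompt_states, response_states, gamma)
```

In the paper's full method, the fixed `gamma` RBF would be replaced by a deep learnable kernel (e.g. an RBF applied on top of a small trained feature network), which the abstract reports sharpens sensitivity to the geometric differences between the two hidden-state distributions.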
