On the Fundamental Impossibility of Hallucination Control in Large Language Models
Michał P. Karpowicz
Abstract
This paper explains why it is impossible to build large language models that never hallucinate, and what trade-offs we should look for instead. It presents a formal impossibility theorem demonstrating that no inference mechanism can simultaneously satisfy four fundamental properties: truthful (non-hallucinatory) generation, semantic information conservation, relevant knowledge revelation, and knowledge-constrained optimality. By modeling LLM inference as an auction of ideas in which neural components compete to contribute to responses, we prove the impossibility using the Green-Laffont theorem. This mathematical framework provides a rigorous foundation for understanding the nature of the inference process, with implications for model architecture, training objectives, and evaluation methods.
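
As a rough illustration of the mechanism-design analogy invoked here (the notation below is ours, not the paper's), the classical Green-Laffont impossibility for quasi-linear direct mechanisms can be sketched as follows:

% Illustrative sketch only: the standard Green-Laffont statement, not the paper's formalism.
% A direct mechanism (f, t_1, ..., t_n) maps reported valuations v = (v_1, ..., v_n)
% to an outcome f(v) and monetary transfers t_i(v).
\begin{align*}
  &\text{Truthfulness (DSIC):} && \text{reporting } v_i \text{ truthfully is a dominant strategy for every agent } i,\\
  &\text{Efficiency:} && f(v) \in \arg\max_{x} \textstyle\sum_i v_i(x),\\
  &\text{Budget balance:} && \textstyle\sum_i t_i(v) = 0 \quad \text{for all } v.
\end{align*}
% On a sufficiently rich valuation domain, no mechanism satisfies all three at once;
% the paper's argument maps analogous properties of LLM inference (truthful generation,
% information conservation, knowledge revelation, knowledge-constrained optimality)
% onto an impossibility of this kind.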