Maximum Hallucination Standards for Domain-Specific Large Language Models

2025-03-07

Tingmingke Lu

Abstract

Large language models (LLMs) often generate inaccurate yet credible-sounding content, known as hallucinations. This inherent tendency poses significant risks, especially in critical domains. I analyze LLMs as a new class of engineering products, treating hallucinations as a product attribute. I demonstrate that, when users are imperfectly aware of LLM hallucinations and misinformation imposes externalities, net welfare improves if the maximum acceptable level of LLM hallucinations is designed to vary with two domain-specific factors: the willingness to pay for reduced LLM hallucinations and the marginal damage associated with misinformation.
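The abstract's welfare claim can be illustrated with a stylized standard-setting condition. The notation below (permitted hallucination rate h, willingness to pay v, marginal misinformation damage d, abatement cost C(h)) is my own illustrative assumption, not the paper's model:

```latex
% Stylized welfare objective for a regulator choosing a maximum
% permitted hallucination rate h (illustrative notation only,
% not taken from the paper).
%
% u    : gross consumer value of the LLM
% v    : willingness to pay per unit reduction in hallucinations
% d    : marginal external damage from misinformation
% C(h) : cost of achieving rate h, with C'(h) < 0 and C''(h) > 0
%        (further reductions are increasingly costly)
\[
  W(h) = u - v\,h - d\,h - C(h)
\]
% First-order condition for the welfare-maximizing standard h*:
\[
  -C'(h^{*}) = v + d
\]
% Since -C'(h) is decreasing in h, a higher v or a higher d
% implies a stricter (lower) optimal maximum h*.
```

Under these assumptions, the optimal maximum standard tightens in domains where users value accuracy more or where misinformation is more damaging, which is the comparative static the abstract describes.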
