
LLM Hallucination Reasoning with Zero-shot Knowledge Test

2024-11-14

Seongmin Lee, Hsiang Hsu, Chun-Fu Chen


Abstract

LLM hallucination, where LLMs occasionally generate unfaithful text, poses significant challenges for their practical applications. Most existing detection methods rely on external knowledge, LLM fine-tuning, or hallucination-labeled datasets, and they do not distinguish between different types of hallucinations, a distinction that is crucial for improving detection performance. We introduce a new task, Hallucination Reasoning, which classifies LLM-generated text into one of three categories: aligned, misaligned, and fabricated. Our novel zero-shot method assesses whether the LLM has sufficient knowledge about a given prompt and text. Our experiments on new datasets demonstrate the effectiveness of our method in hallucination reasoning and underscore its importance for enhancing detection performance.
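The abstract does not give implementation details, but the three-way taxonomy and the knowledge-gated decision it describes can be sketched at the interface level. The snippet below is a minimal illustration, not the paper's method: the score names, thresholds, and the `reason_hallucination` helper are all hypothetical placeholders standing in for the zero-shot knowledge test and an alignment check.

```python
from enum import Enum


class HallucinationType(Enum):
    ALIGNED = "aligned"        # text is consistent with the prompt and the model's knowledge
    MISALIGNED = "misaligned"  # model has relevant knowledge, but the text contradicts it
    FABRICATED = "fabricated"  # model lacks the knowledge needed to ground the text


def reason_hallucination(knowledge_score: float,
                         alignment_score: float,
                         knowledge_threshold: float = 0.5,
                         alignment_threshold: float = 0.5) -> HallucinationType:
    """Toy decision rule (hypothetical): first test whether the LLM knows enough
    about the prompt/text, then check whether the generated text aligns with it."""
    if knowledge_score < knowledge_threshold:
        # Insufficient knowledge: the text cannot be grounded, so treat it as fabricated.
        return HallucinationType.FABRICATED
    if alignment_score < alignment_threshold:
        # Knowledgeable but inconsistent output is treated as misaligned.
        return HallucinationType.MISALIGNED
    return HallucinationType.ALIGNED


if __name__ == "__main__":
    # Example: the model appears knowledgeable, yet the generated text disagrees with it.
    print(reason_hallucination(knowledge_score=0.8, alignment_score=0.2))
    # HallucinationType.MISALIGNED
```

In this reading, the knowledge test acts as a gate: only when the model is judged to know enough about the prompt does it make sense to ask whether the text is aligned or misaligned; otherwise the text is classified as fabricated.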
