SOTAVerified

Hallucination

Papers

Showing 311–320 of 1816 papers

Title | Status | Hype
Seeing is Believing: Mitigating Hallucination in Large Vision-Language Models via CLIP-Guided Decoding | Code | 1
Visual Hallucinations of Multi-modal Large Language Models | Code | 1
TofuEval: Evaluating Hallucinations of LLMs on Topic-Focused Dialogue Summarization | Code | 1
Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models | Code | 1
EFUF: Efficient Fine-grained Unlearning Framework for Mitigating Hallucinations in Multimodal Large Language Models | Code | 1
Uncertainty Quantification for In-Context Learning of Large Language Models | Code | 1
Into the Unknown: Self-Learning Large Language Models | Code | 1
Gemini Goes to Med School: Exploring the Capabilities of Multimodal Large Language Models on Medical Challenge Problems & Hallucinations | Code | 1
Introspective Planning: Aligning Robots' Uncertainty with Inherent Task Ambiguity | Code | 1
INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection | Code | 1
Page 32 of 182
