SOTAVerified

Object Hallucination

Papers

Showing 51–60 of 71 papers

Title | Status | Hype
Evaluating Hallucination in Large Vision-Language Models based on Context-Aware Object Similarities | | 0
Simple Token-Level Confidence Improves Caption Correctness | | 0
Effectiveness Assessment of Recent Large Vision-Language Models | | 0
EAZY: Eliminating Hallucinations in LVLMs by Zeroing out Hallucinatory Image Tokens | | 0
GROUNDHOG: Grounding Large Language Models to Holistic Segmentation | | 0
Do More Details Always Introduce More Hallucinations in LVLM-based Image Captioning? | | 0
The Role of Background Information in Reducing Object Hallucination in Vision-Language Models: Insights from Cutoff API Prompting | | 0
ICT: Image-Object Cross-Level Trusted Intervention for Mitigating Object Hallucination in Large Vision-Language Models | | 0
KNVQA: A Benchmark for evaluation knowledge-based VQA | | 0
"I've Seen Things You People Wouldn't Believe": Hallucinating Entities in GuessWhat?! | | 0
Page 6 of 8

No leaderboard results yet.