SOTAVerified

Object Hallucination

Papers

Showing 51–71 of 71 papers

Title | Status | Hype
Evaluating Hallucination in Large Vision-Language Models based on Context-Aware Object Similarities | | 0
Simple Token-Level Confidence Improves Caption Correctness | | 0
Effectiveness Assessment of Recent Large Vision-Language Models | | 0
EAZY: Eliminating Hallucinations in LVLMs by Zeroing out Hallucinatory Image Tokens | | 0
GROUNDHOG: Grounding Large Language Models to Holistic Segmentation | | 0
Do More Details Always Introduce More Hallucinations in LVLM-based Image Captioning? | | 0
The Role of Background Information in Reducing Object Hallucination in Vision-Language Models: Insights from Cutoff API Prompting | | 0
ICT: Image-Object Cross-Level Trusted Intervention for Mitigating Object Hallucination in Large Vision-Language Models | | 0
KNVQA: A Benchmark for evaluation knowledge-based VQA | | 0
"I've Seen Things You People Wouldn't Believe": Hallucinating Entities in GuessWhat?! | | 0
Black-Box Visual Prompt Engineering for Mitigating Object Hallucination in Large Vision Language Models | | 0
A Comprehensive Analysis for Visual Object Hallucination in Large Vision-Language Models | | 0
Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? | | 0
Visual Instruction Bottleneck Tuning | | 0
Mitigating Object Hallucination in Large Vision-Language Models via Classifier-Free Guidance | | 0
Deep Learning Approaches on Image Captioning: A Review | | 0
Mitigating Object Hallucinations in Large Vision-Language Models via Attention Calibration | | 0
DAMRO: Dive into the Attention Mechanism of LVLM to Reduce Object Hallucination | | 0
Data-augmented phrase-level alignment for mitigating object hallucination | | 0
CutPaste&Find: Efficient Multimodal Hallucination Detector with Visual-aid Knowledge Base | | 0
Negative Object Presence Evaluation (NOPE) to Measure Object Hallucination in Vision-Language Models | | 0
Page 2 of 2

No leaderboard results yet.