SOTAVerified

Object Hallucination

Papers

Showing 51–71 of 71 papers

Title | Status | Hype
Understanding Multimodal Hallucination with Parameter-Free Representation Alignment | Code | 0
Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? | | 0
Do More Details Always Introduce More Hallucinations in LVLM-based Image Captioning? | | 0
Data-augmented phrase-level alignment for mitigating object hallucination | | 0
RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in Large Vision Language Models | | 0
Think Before You Act: A Two-Stage Framework for Mitigating Gender Bias Towards Vision-Language Tasks | Code | 0
ALOHa: A New Measure for Hallucination in Captioning Models | | 0
Effectiveness Assessment of Recent Large Vision-Language Models | | 0
GROUNDHOG: Grounding Large Language Models to Holistic Segmentation | | 0
Mitigating Object Hallucination in Large Vision-Language Models via Classifier-Free Guidance | | 0
Instruction Makes a Difference | Code | 0
KNVQA: A Benchmark for evaluation knowledge-based VQA | | 0
Negative Object Presence Evaluation (NOPE) to Measure Object Hallucination in Vision-Language Models | | 0
Simple Token-Level Confidence Improves Caption Correctness | | 0
Plausible May Not Be Faithful: Probing Object Hallucination in Vision-Language Pre-training | Code | 0
Deep Learning Approaches on Image Captioning: A Review | | 0
Consensus Graph Representation Learning for Better Grounded Image Captioning | | 0
Relational Graph Learning for Grounded Video Description Generation | | 0
"I've Seen Things You People Wouldn't Believe": Hallucinating Entities in GuessWhat?! | | 0
Explain and Improve: LRP-Inference Fine-Tuning for Image Captioning Models | Code | 0
Object Hallucination in Image Captioning | Code | 0

No leaderboard results yet.