SOTAVerified

Object Hallucination

Papers

Showing 51-60 of 71 papers

Title | Status | Hype
Understanding Multimodal Hallucination with Parameter-Free Representation Alignment | Code | 0
Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? | | 0
Do More Details Always Introduce More Hallucinations in LVLM-based Image Captioning? | | 0
Data-augmented phrase-level alignment for mitigating object hallucination | | 0
RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in Large Vision Language Models | | 0
Think Before You Act: A Two-Stage Framework for Mitigating Gender Bias Towards Vision-Language Tasks | Code | 0
ALOHa: A New Measure for Hallucination in Captioning Models | | 0
Effectiveness Assessment of Recent Large Vision-Language Models | | 0
GROUNDHOG: Grounding Large Language Models to Holistic Segmentation | | 0
Mitigating Object Hallucination in Large Vision-Language Models via Classifier-Free Guidance | | 0

No leaderboard results yet.