SOTAVerified

Object Hallucination

Papers

Showing 31–40 of 71 papers

Title | Status | Hype
Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? | - | 0
Do More Details Always Introduce More Hallucinations in LVLM-based Image Captioning? | - | 0
Understanding Sounds, Missing the Questions: The Challenge of Object Hallucination in Large Audio-Language Models | Code | 2
Data-augmented phrase-level alignment for mitigating object hallucination | - | 0
RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in Large Vision Language Models | - | 0
RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness | Code | 11
Think Before You Act: A Two-Stage Framework for Mitigating Gender Bias Towards Vision-Language Tasks | Code | 0
ALOHa: A New Measure for Hallucination in Captioning Models | - | 0
Effectiveness Assessment of Recent Large Vision-Language Models | - | 0
HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding | Code | 2
Page 4 of 8

No leaderboard results yet.