
Object Hallucination

Papers

Showing 1–25 of 71 papers

Title | Status | Hype
RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness | Code | 11
MoE-LLaVA: Mixture of Experts for Large Vision-Language Models | Code | 7
Ferret: Refer and Ground Anything Anywhere at Any Granularity | Code | 5
Mitigating Object Hallucination via Concentric Causal Attention | Code | 2
ClearSight: Visual Signal Enhancement for Object Hallucination Mitigation in Multimodal Large Language Models | Code | 2
Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding | Code | 2
LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models | Code | 2
HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding | Code | 2
TinyLVLM-eHub: Towards Comprehensive and Efficient Evaluation for Large Vision-Language Models | Code | 2
From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models | Code | 2
Evaluating Object Hallucination in Large Vision-Language Models | Code | 2
Understanding Sounds, Missing the Questions: The Challenge of Object Hallucination in Large Audio-Language Models | Code | 2
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | Code | 1
Mitigating Hallucinations in Large Vision-Language Models via Summary-Guided Decoding | Code | 1
HallE-Control: Controlling Object Hallucination in Large Multimodal Models | Code | 1
Mitigating Hallucinations in Large Vision-Language Models by Adaptively Constraining Information Flow | Code | 1
Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models | Code | 1
CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning | Code | 1
Extract Free Dense Misalignment from CLIP | Code | 1
Mitigating Fine-Grained Hallucination by Fine-Tuning Large Vision-Language Models with Caption Rewrites | Code | 1
Detecting and Preventing Hallucinations in Large Vision Language Models | Code | 1
HyperPocket: Generative Point Cloud Completion | Code | 1
EFUF: Efficient Fine-grained Unlearning Framework for Mitigating Hallucinations in Multimodal Large Language Models | Code | 1
Let there be a clock on the beach: Reducing Object Hallucination in Image Captioning | Code | 1
Multi-Object Hallucination in Vision-Language Models | Code | 1
Page 1 of 3

No leaderboard results yet.