SOTAVerified

Object Hallucination

Papers

Showing 11–20 of 71 papers

Title | Status | Hype
LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models | Code | 2
Evaluating Object Hallucination in Large Vision-Language Models | Code | 2
Revisit What You See: Disclose Language Prior in Vision Tokens for Efficient Guided Decoding of LVLMs | Code | 1
CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning | Code | 1
TruthPrInt: Mitigating LVLM Object Hallucination Via Latent Truthful-Guided Pre-Intervention | Code | 1
Mitigating Hallucinations in Large Vision-Language Models by Adaptively Constraining Information Flow | Code | 1
Extract Free Dense Misalignment from CLIP | Code | 1
Mitigating Hallucinations in Large Vision-Language Models via Summary-Guided Decoding | Code | 1
Multi-Object Hallucination in Vision-Language Models | Code | 1
Seeing is Believing: Mitigating Hallucination in Large Vision-Language Models via CLIP-Guided Decoding | Code | 1
Page 2 of 8

No leaderboard results yet.