SOTAVerified

Object Hallucination

Papers

Showing 41–50 of 71 papers

| Title | Status | Hype |
|---|---|---|
| GROUNDHOG: Grounding Large Language Models to Holistic Segmentation | | 0 |
| Seeing is Believing: Mitigating Hallucination in Large Vision-Language Models via CLIP-Guided Decoding | Code | 1 |
| Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models | Code | 1 |
| EFUF: Efficient Fine-grained Unlearning Framework for Mitigating Hallucinations in Multimodal Large Language Models | Code | 1 |
| Mitigating Object Hallucination in Large Vision-Language Models via Classifier-Free Guidance | | 0 |
| Instruction Makes a Difference | Code | 0 |
| MoE-LLaVA: Mixture of Experts for Large Vision-Language Models | Code | 7 |
| Towards Enhanced Image Inpainting: Mitigating Unwanted Object Insertion and Preserving Color Consistency | Code | 1 |
| Mitigating Fine-Grained Hallucination by Fine-Tuning Large Vision-Language Models with Caption Rewrites | Code | 1 |
| Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding | Code | 2 |
Page 5 of 8

No leaderboard results yet.