SOTAVerified

Object Hallucination

Papers

Showing 26–50 of 71 papers

| Title | Status | Hype |
| --- | --- | --- |
| Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | Code | 1 |
| Detecting and Preventing Hallucinations in Large Vision Language Models | Code | 1 |
| Transferable Decoding with Visual Entities for Zero-Shot Image Captioning | Code | 1 |
| Let there be a clock on the beach: Reducing Object Hallucination in Image Captioning | Code | 1 |
| HyperPocket: Generative Point Cloud Completion | Code | 1 |
| SECOND: Mitigating Perceptual Hallucination in Vision-Language Models via Selective and Contrastive Decoding | Code | 0 |
| Reducing Object Hallucination in Large Audio-Language Models via Audio-Aware Decoding | — | 0 |
| Retrieval Visual Contrastive Decoding to Mitigate Object Hallucinations in Large Vision-Language Models | Code | 0 |
| Visual Instruction Bottleneck Tuning | — | 0 |
| A Comprehensive Analysis for Visual Object Hallucination in Large Vision-Language Models | — | 0 |
| Black-Box Visual Prompt Engineering for Mitigating Object Hallucination in Large Vision Language Models | — | 0 |
| Seeing What's Not There: Spurious Correlation in Multimodal LLMs | — | 0 |
| OmniPaint: Mastering Object-Oriented Editing via Disentangled Insertion-Removal Inpainting | — | 0 |
| EAZY: Eliminating Hallucinations in LVLMs by Zeroing out Hallucinatory Image Tokens | — | 0 |
| Vision-Encoders (Already) Know What They See: Mitigating Object Hallucination via Simple Fine-Grained CLIPScore | Code | 0 |
| The Role of Background Information in Reducing Object Hallucination in Vision-Language Models: Insights from Cutoff API Prompting | — | 0 |
| CutPaste&Find: Efficient Multimodal Hallucination Detector with Visual-aid Knowledge Base | — | 0 |
| Mitigating Object Hallucinations in Large Vision-Language Models via Attention Calibration | — | 0 |
| Poison as Cure: Visual Noise for Mitigating Object Hallucinations in LVMs | — | 0 |
| Evaluating Hallucination in Large Vision-Language Models based on Context-Aware Object Similarities | — | 0 |
| HALLUCINOGEN: A Benchmark for Evaluating Object Hallucination in Large Visual-Language Models | Code | 0 |
| ICT: Image-Object Cross-Level Trusted Intervention for Mitigating Object Hallucination in Large Vision-Language Models | — | 0 |
| Unified Triplet-Level Hallucination Evaluation for Large Vision-Language Models | Code | 0 |
| DAMRO: Dive into the Attention Mechanism of LVLM to Reduce Object Hallucination | — | 0 |
| Investigating and Mitigating Object Hallucinations in Pretrained Vision-Language (CLIP) Models | Code | 0 |

No leaderboard results yet.