SOTAVerified

Object Hallucination

Papers

Showing 1–50 of 71 papers

Title | Status | Hype
Revisit What You See: Disclose Language Prior in Vision Tokens for Efficient Guided Decoding of LVLMs | Code | 1
SECOND: Mitigating Perceptual Hallucination in Vision-Language Models via Selective and Contrastive Decoding | Code | 0
Reducing Object Hallucination in Large Audio-Language Models via Audio-Aware Decoding | – | 0
Retrieval Visual Contrastive Decoding to Mitigate Object Hallucinations in Large Vision-Language Models | Code | 0
Visual Instruction Bottleneck Tuning | – | 0
A Comprehensive Analysis for Visual Object Hallucination in Large Vision-Language Models | – | 0
Black-Box Visual Prompt Engineering for Mitigating Object Hallucination in Large Vision Language Models | – | 0
CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning | Code | 1
ClearSight: Visual Signal Enhancement for Object Hallucination Mitigation in Multimodal Large language Models | Code | 2
TruthPrInt: Mitigating LVLM Object Hallucination Via Latent Truthful-Guided Pre-Intervention | Code | 1
Seeing What's Not There: Spurious Correlation in Multimodal LLMs | – | 0
OmniPaint: Mastering Object-Oriented Editing via Disentangled Insertion-Removal Inpainting | – | 0
EAZY: Eliminating Hallucinations in LVLMs by Zeroing out Hallucinatory Image Tokens | – | 0
Mitigating Hallucinations in Large Vision-Language Models by Adaptively Constraining Information Flow | Code | 1
Vision-Encoders (Already) Know What They See: Mitigating Object Hallucination via Simple Fine-Grained CLIPScore | Code | 0
The Role of Background Information in Reducing Object Hallucination in Vision-Language Models: Insights from Cutoff API Prompting | – | 0
CutPaste&Find: Efficient Multimodal Hallucination Detector with Visual-aid Knowledge Base | – | 0
Mitigating Object Hallucinations in Large Vision-Language Models via Attention Calibration | – | 0
Poison as Cure: Visual Noise for Mitigating Object Hallucinations in LVMs | – | 0
Evaluating Hallucination in Large Vision-Language Models based on Context-Aware Object Similarities | – | 0
HALLUCINOGEN: A Benchmark for Evaluating Object Hallucination in Large Visual-Language Models | Code | 0
Extract Free Dense Misalignment from CLIP | Code | 1
ICT: Image-Object Cross-Level Trusted Intervention for Mitigating Object Hallucination in Large Vision-Language Models | – | 0
Unified Triplet-Level Hallucination Evaluation for Large Vision-Language Models | Code | 0
Mitigating Object Hallucination via Concentric Causal Attention | Code | 2
Mitigating Hallucinations in Large Vision-Language Models via Summary-Guided Decoding | Code | 1
DAMRO: Dive into the Attention Mechanism of LVLM to Reduce Object Hallucination | – | 0
Investigating and Mitigating Object Hallucinations in Pretrained Vision-Language (CLIP) Models | Code | 0
Understanding Multimodal Hallucination with Parameter-Free Representation Alignment | Code | 0
Multi-Object Hallucination in Vision-Language Models | Code | 1
Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? | – | 0
Do More Details Always Introduce More Hallucinations in LVLM-based Image Captioning? | – | 0
Understanding Sounds, Missing the Questions: The Challenge of Object Hallucination in Large Audio-Language Models | Code | 2
Data-augmented phrase-level alignment for mitigating object hallucination | – | 0
RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in Large Vision Language Models | – | 0
RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness | Code | 11
Think Before You Act: A Two-Stage Framework for Mitigating Gender Bias Towards Vision-Language Tasks | Code | 0
ALOHa: A New Measure for Hallucination in Captioning Models | – | 0
Effectiveness Assessment of Recent Large Vision-Language Models | – | 0
HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding | Code | 2
GROUNDHOG: Grounding Large Language Models to Holistic Segmentation | – | 0
Seeing is Believing: Mitigating Hallucination in Large Vision-Language Models via CLIP-Guided Decoding | Code | 1
Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models | Code | 1
EFUF: Efficient Fine-grained Unlearning Framework for Mitigating Hallucinations in Multimodal Large Language Models | Code | 1
Mitigating Object Hallucination in Large Vision-Language Models via Classifier-Free Guidance | – | 0
Instruction Makes a Difference | Code | 0
MoE-LLaVA: Mixture of Experts for Large Vision-Language Models | Code | 7
Towards Enhanced Image Inpainting: Mitigating Unwanted Object Insertion and Preserving Color Consistency | Code | 1
Mitigating Fine-Grained Hallucination by Fine-Tuning Large Vision-Language Models with Caption Rewrites | Code | 1
Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding | Code | 2

Leaderboard

No leaderboard results yet.