SOTAVerified

Hallucination

Papers

Showing 161–170 of 1816 papers

Title | Status | Hype
Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality | Code | 2
Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding | Code | 2
Rethinking Abdominal Organ Segmentation (RAOS) in the clinical scenario: A robustness evaluation benchmark with challenging cases | Code | 2
Gemini Goes to Med School: Exploring the Capabilities of Multimodal Large Language Models on Medical Challenge Problems & Hallucinations | Code | 1
Deep Learning-based Face Super-Resolution: A Survey | Code | 1
Deficiency-Aware Masked Transformer for Video Inpainting | Code | 1
Antidote: A Unified Framework for Mitigating LVLM Hallucinations in Counterfactual Presupposition and Object Perception | Code | 1
Adversarial Feature Hallucination Networks for Few-Shot Learning | Code | 1
DEEM: Diffusion Models Serve as the Eyes of Large Language Models for Image Perception | Code | 1
PAINT: Paying Attention to INformed Tokens to Mitigate Hallucination in Large Vision-Language Model | Code | 1
Page 17 of 182

No leaderboard results yet.