SOTAVerified

Hallucination Papers

Showing 101–125 of 1816 papers

| Title | Status | Hype |
| --- | --- | --- |
| Medical Hallucinations in Foundation Models and Their Impact on Healthcare | Code | 2 |
| MeMemo: On-device Retrieval Augmentation for Private and Personalized Text Generation | Code | 2 |
| Calibrated Self-Rewarding Vision Language Models | Code | 2 |
| Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Attention Lens | Code | 2 |
| Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding | Code | 2 |
| Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention | Code | 2 |
| Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective | Code | 2 |
| InstructGraph: Boosting Large Language Models via Graph-centric Instruction Tuning and Preference Alignment | Code | 2 |
| Differential Transformer | Code | 2 |
| In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation | Code | 2 |
| Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions | Code | 2 |
| KnowHalu: Hallucination Detection via Multi-Form Knowledge Based Factual Checking | Code | 2 |
| Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration | Code | 2 |
| One-for-More: Continual Diffusion Model for Anomaly Detection | Code | 2 |
| HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models | Code | 2 |
| Aligning Modalities in Vision Large Language Models via Preference Fine-tuning | Code | 2 |
| Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Code | 2 |
| PoseTriplet: Co-evolving 3D Human Pose Estimation, Imitation, and Hallucination under Self-supervision | Code | 2 |
| HalOmi: A Manually Annotated Benchmark for Multilingual Hallucination and Omission Detection in Machine Translation | Code | 2 |
| Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning | Code | 2 |
| Automatically Correcting Large Language Models: Surveying the landscape of diverse self-correction strategies | Code | 2 |
| HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding | Code | 2 |
| Dynamic Parametric Retrieval Augmented Generation for Test-time Knowledge Enhancement | Code | 2 |
| VHM: Versatile and Honest Vision Language Model for Remote Sensing Image Analysis | Code | 2 |
| HallusionBench: An Advanced Diagnostic Suite for Entangled Language Hallucination and Visual Illusion in Large Vision-Language Models | Code | 2 |
Page 5 of 73
