| Title | Date | Tags | Code | Count |
|---|---|---|---|---|
| A Lightweight Multi-Expert Generative Language Model System for Engineering Information and Knowledge Extraction | May 27, 2025 | Domain Adaptation, Hallucination | Unverified | 0 |
| Mitigating Hallucination in Large Vision-Language Models via Adaptive Attention Calibration | May 27, 2025 | Hallucination, Visual Grounding | Unverified | 0 |
| R3-RAG: Learning Step-by-Step Reasoning and Retrieval for LLMs via Reinforcement Learning | May 26, 2025 | Hallucination, RAG | Code Available | 1 |
| Retrieval Visual Contrastive Decoding to Mitigate Object Hallucinations in Large Vision-Language Models | May 26, 2025 | Hallucination, Object Hallucination | Code Available | 0 |
| Causal-LLaVA: Causal Disentanglement for Mitigating Hallucination in Multimodal Large Language Models | May 26, 2025 | Disentanglement, Hallucination | Code Available | 0 |
| Attention! Your Vision Language Model Could Be Maliciously Manipulated | May 26, 2025 | Decision Making, Hallucination | Unverified | 0 |
| Error Typing for Smarter Rewards: Improving Process Reward Models with Error-Aware Hierarchical Supervision | May 26, 2025 | Hallucination, Math | Code Available | 0 |
| Enhancing Visual Reliance in Text Generation: A Bayesian Perspective on Mitigating Hallucination in Large Vision-Language Models | May 26, 2025 | Hallucination, MME | Unverified | 0 |
| Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration | May 26, 2025 | Domain Generalization, Hallucination | Code Available | 2 |
| Grounding Language with Vision: A Conditional Mutual Information Calibrated Decoding Strategy for Reducing Hallucinations in LVLMs | May 26, 2025 | Hallucination | Unverified | 0 |
| Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs | May 26, 2025 | Hallucination, Question Answering | Unverified | 0 |
| LLLMs: A Data-Driven Survey of Evolving Research on Limitations of Large Language Models | May 25, 2025 | Hallucination, Knowledge Editing | Unverified | 0 |
| CCHall: A Novel Benchmark for Joint Cross-Lingual and Cross-Modal Hallucinations Detection in Large Language Models | May 25, 2025 | Hallucination | Code Available | 0 |
| GUARDIAN: Safeguarding LLM Multi-Agent Collaborations with Temporal Graph Modeling | May 25, 2025 | Decoder, Hallucination | Unverified | 0 |
| Removal of Hallucination on Hallucination: Debate-Augmented RAG | May 24, 2025 | Hallucination, RAG | Code Available | 1 |
| MedScore: Factuality Evaluation of Free-Form Medical Answers | May 24, 2025 | Form, Hallucination | Code Available | 0 |
| More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models | May 23, 2025 | Diagnostic, Hallucination | Unverified | 0 |
| Teaching with Lies: Curriculum DPO on Synthetic Negatives for Hallucination Detection | May 23, 2025 | Fact Checking, Hallucination | Unverified | 0 |
| keepitsimple at SemEval-2025 Task 3: LLM-Uncertainty based Approach for Multilingual Hallucination Span Detection | May 23, 2025 | Hallucination, Language Modeling | Code Available | 0 |
| LLM-Powered Agents for Navigating Venice's Historical Cadastre | May 22, 2025 | Hallucination, Natural Language Queries | Unverified | 0 |
| Chain-of-Thought Poisoning Attacks against R1-based Retrieval-Augmented Generation Systems | May 22, 2025 | Adversarial Attack, Hallucination | Unverified | 0 |
| Shadows in the Attention: Contextual Perturbation and Representation Drift in the Dynamics of Hallucination in LLMs | May 22, 2025 | Hallucination, TruthfulQA | Unverified | 0 |
| Mitigating Hallucinations in Vision-Language Models through Image-Guided Head Suppression | May 22, 2025 | Hallucination, Image Description | Code Available | 1 |
| Steering LVLMs via Sparse Autoencoder for Hallucination Mitigation | May 22, 2025 | Hallucination, Image Captioning | Unverified | 0 |
| Seeing Far and Clearly: Mitigating Hallucinations in MLLMs with Attention Causal Decoding | May 22, 2025 | Causal Inference, Hallucination | Unverified | 0 |