| Title | Date | Topics | Code | |
| --- | --- | --- | --- | --- |
| Are Reasoning Models More Prone to Hallucination? | May 29, 2025 | Hallucination | Unverified | 0 |
| Qwen Look Again: Guiding Vision-Language Reasoning Models to Re-attention Visual Information | May 29, 2025 | Hallucination | Code Available | 0 |
| Evaluation Hallucination in Multi-Round Incomplete Information Lateral-Driven Reasoning Tasks | May 28, 2025 | Hallucination | Unverified | 0 |
| SkewRoute: Training-Free LLM Routing for Knowledge Graph Retrieval-Augmented Generation via Score Skewness of Retrieved Context | May 28, 2025 | Hallucination, RAG | Unverified | 0 |
| CogniBench: A Legal-inspired Framework and Dataset for Assessing Cognitive Faithfulness of Large Language Models | May 27, 2025 | Hallucination, Language Modeling | Code Available | 1 |
| A Lightweight Multi-Expert Generative Language Model System for Engineering Information and Knowledge Extraction | May 27, 2025 | Domain Adaptation, Hallucination | Unverified | 0 |
| Mitigating Hallucination in Large Vision-Language Models via Adaptive Attention Calibration | May 27, 2025 | Hallucination, Visual Grounding | Unverified | 0 |
| R3-RAG: Learning Step-by-Step Reasoning and Retrieval for LLMs via Reinforcement Learning | May 26, 2025 | Hallucination, RAG | Code Available | 1 |
| Retrieval Visual Contrastive Decoding to Mitigate Object Hallucinations in Large Vision-Language Models | May 26, 2025 | Hallucination, Object Hallucination | Code Available | 0 |
| Causal-LLaVA: Causal Disentanglement for Mitigating Hallucination in Multimodal Large Language Models | May 26, 2025 | Disentanglement, Hallucination | Code Available | 0 |