| Title | Date | Tags | Code | # |
| --- | --- | --- | --- | --- |
| Mitigating Object Hallucinations via Sentence-Level Early Intervention | Jul 16, 2025 | Hallucination, MM-Vet | Code Available | 1 |
| ByDeWay: Boost Your multimodal LLM with DEpth prompting in a Training-Free Way | Jul 11, 2025 | Depth Estimation, Hallucination | Unverified | 0 |
| UQLM: A Python Package for Uncertainty Quantification in Large Language Models | Jul 8, 2025 | Hallucination, Uncertainty Quantification | Code Available | 5 |
| ReLoop: "Seeing Twice and Thinking Backwards" via Closed-loop Training to Mitigate Hallucinations in Multimodal understanding | Jul 7, 2025 | Hallucination, Question Answering | Unverified | 0 |
| DeepRetro: Retrosynthetic Pathway Discovery using Iterative LLM Reasoning | Jul 7, 2025 | Hallucination, Large Language Model | Unverified | 0 |
| The Future is Agentic: Definitions, Perspectives, and Open Challenges of Multi-Agent Recommender Systems | Jul 2, 2025 | Explanation Generation, Hallucination | Unverified | 0 |
| GAF-Guard: An Agentic Framework for Risk Management and Governance in Large Language Models | Jul 1, 2025 | Hallucination, Management | Code Available | 0 |
| Mitigating Hallucination of Large Vision-Language Models via Dynamic Logits Calibration | Jun 26, 2025 | Hallucination, Text Generation | Code Available | 0 |
| HalluSegBench: Counterfactual Visual Reasoning for Segmentation Hallucination Evaluation | Jun 26, 2025 | Counterfactual, Counterfactual Reasoning | Unverified | 0 |
| Seeing is Believing? Mitigating OCR Hallucinations in Multimodal Large Language Models | Jun 25, 2025 | Document Understanding, Hallucination | Unverified | 0 |