| Title | Date | Topics | Code | Count |
|---|---|---|---|---|
| A Lightweight Multi-Expert Generative Language Model System for Engineering Information and Knowledge Extraction | May 27, 2025 | Domain Adaptation, Hallucination | Unverified | 0 |
| Retrieval Visual Contrastive Decoding to Mitigate Object Hallucinations in Large Vision-Language Models | May 26, 2025 | Hallucination, Object Hallucination | Code Available | 0 |
| Enhancing Visual Reliance in Text Generation: A Bayesian Perspective on Mitigating Hallucination in Large Vision-Language Models | May 26, 2025 | Hallucination, MME | Unverified | 0 |
| Error Typing for Smarter Rewards: Improving Process Reward Models with Error-Aware Hierarchical Supervision | May 26, 2025 | Hallucination, Math | Code Available | 0 |
| Grounding Language with Vision: A Conditional Mutual Information Calibrated Decoding Strategy for Reducing Hallucinations in LVLMs | May 26, 2025 | Hallucination | Unverified | 0 |
| Attention! Your Vision Language Model Could Be Maliciously Manipulated | May 26, 2025 | Decision Making, Hallucination | Unverified | 0 |
| Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs | May 26, 2025 | Hallucination, Question Answering | Unverified | 0 |
| Causal-LLaVA: Causal Disentanglement for Mitigating Hallucination in Multimodal Large Language Models | May 26, 2025 | Disentanglement, Hallucination | Code Available | 0 |
| GUARDIAN: Safeguarding LLM Multi-Agent Collaborations with Temporal Graph Modeling | May 25, 2025 | Decoder, Hallucination | Unverified | 0 |
| LLLMs: A Data-Driven Survey of Evolving Research on Limitations of Large Language Models | May 25, 2025 | Hallucination, Knowledge Editing | Unverified | 0 |