| Title | Date | Topics |
| --- | --- | --- |
| Are Reasoning Models More Prone to Hallucination? | May 29, 2025 | Hallucination |
| A review of faithfulness metrics for hallucination assessment in Large Language Models | Dec 31, 2024 | Benchmarking, Hallucination |
| ARGUS: Hallucination and Omission Evaluation in Video-LLMs | Jun 9, 2025 | Descriptive, Form |
| ArxEval: Evaluating Retrieval and Generation in Language Models for Scientific Literature | Jan 17, 2025 | Hallucination, Retrieval |
| ASCD: Attention-Steerable Contrastive Decoding for Reducing Hallucination in MLLM | Jun 17, 2025 | Hallucination, Language Modeling |
| A Schema-Guided Reason-while-Retrieve framework for Reasoning on Scene Graphs with Large-Language-Models (LLMs) | Feb 5, 2025 | Hallucination, Spatial Reasoning |
| A Simple Recipe towards Reducing Hallucination in Neural Surface Realisation | Jul 1, 2019 | Hallucination, Text Generation |
| Ask-EDA: A Design Assistant Empowered by LLM, Hybrid RAG and Abbreviation De-hallucination | Jun 3, 2024 | Hallucination, Question Answering |
| Aspect-Based Summarization with Self-Aspect Retrieval Enhanced Generation | Apr 17, 2025 | Hallucination, In-Context Learning |
| Assessing the use of Diffusion models for motion artifact correction in brain MRI | Feb 3, 2025 | Diagnostic, Hallucination |