| LLMs can Find Mathematical Reasoning Mistakes by Pedagogical Chain-of-Thought | May 9, 2024 | Hallucination, Math | —Unverified | 0 | 0 |
| LLMSeR: Enhancing Sequential Recommendation via LLM-based Data Augmentation | Mar 16, 2025 | Data Augmentation, Hallucination | —Unverified | 0 | 0 |
| LLMs Prompted for Graphs: Hallucinations and Generative Capabilities | Aug 30, 2024 | Diversity, Hallucination | —Unverified | 0 | 0 |
| LLMs in the Heart of Differential Testing: A Case Study on a Medical Rule Engine | Feb 16, 2024 | Hallucination | —Unverified | 0 | 0 |
| LLMs & Legal Aid: Understanding Legal Needs Exhibited Through User Queries | Jan 3, 2025 | Hallucination, zero-shot-classification | —Unverified | 0 | 0 |
| LLMs Will Always Hallucinate, and We Need to Live With This | Sep 9, 2024 | Fact Checking, Hallucination | —Unverified | 0 | 0 |
| LLM Uncertainty Quantification through Directional Entailment Graph and Claim Level Response Augmentation | Jul 1, 2024 | Hallucination, Uncertainty Quantification | —Unverified | 0 | 0 |
| LMOD: A Large Multimodal Ophthalmology Dataset and Benchmark for Large Vision-Language Models | Oct 2, 2024 | Hallucination | —Unverified | 0 | 0 |
| Localizing Before Answering: A Hallucination Evaluation Benchmark for Grounded Medical Multimodal LLMs | Apr 30, 2025 | Hallucination, Hallucination Evaluation | —Unverified | 0 | 0 |
| Locate-then-Merge: Neuron-Level Parameter Fusion for Mitigating Catastrophic Forgetting in Multimodal LLMs | May 22, 2025 | Hallucination | —Unverified | 0 | 0 |