| Title | Date | Tags |
| --- | --- | --- |
| The Hallucination Tax of Reinforcement Finetuning | May 20, 2025 | Hallucination, Math |
| The Illusionist's Prompt: Exposing the Factual Vulnerabilities of Large Language Models with Linguistic Nuances | Apr 1, 2025 | Hallucination |
| The Impact of Large Language Models on Task Automation in Manufacturing Services | May 14, 2025 | Hallucination, Question Answering |
| The Labyrinth of Links: Navigating the Associative Maze of Multi-modal LLMs | Oct 2, 2024 | Benchmarking, Hallucination |
| The Law of Knowledge Overshadowing: Towards Understanding, Predicting, and Preventing LLM Hallucination | Feb 22, 2025 | Hallucination, Text Generation |
| The Need for Guardrails with Large Language Models in Medical Safety-Critical Settings: An Artificial Intelligence Application in the Pharmacovigilance Ecosystem | Jul 1, 2024 | Hallucination, Pharmacovigilance |
| Theory of Hallucinations based on Equivariance | Dec 22, 2023 | Hallucination |
| The Pitfalls of Defining Hallucination | Jan 15, 2024 | Hallucination, NLG Evaluation |
| What Makes for Good Image Captions? | May 1, 2024 | Hallucination, Image Captioning |
| The Role of Background Information in Reducing Object Hallucination in Vision-Language Models: Insights from Cutoff API Prompting | Feb 21, 2025 | Hallucination, Object |