| Title | Date | Tags | Status |
| --- | --- | --- | --- |
| On Mitigating Code LLM Hallucinations with API Documentation | Jul 13, 2024 | Hallucination, Valid | Unverified |
| On the Audio Hallucinations in Large Audio-Video Language Models | Jan 18, 2024 | Hallucination, Sentence | Unverified |
| On the Capacity of Citation Generation by Large Language Models | Oct 15, 2024 | Attribute, Hallucination | Unverified |
| On the Cost and Benefits of Training Context with Utterance or Full Conversation Training: A Comparative Study | May 12, 2025 | GPU, Hallucination | Unverified |
| On the Fundamental Impossibility of Hallucination Control in Large Language Models | Jun 4, 2025 | Hallucination | Unverified |
| On the Importance of Text Preprocessing for Multimodal Representation Learning and Pathology Report Generation | Feb 26, 2025 | Cross-Modal Retrieval, Hallucination | Unverified |
| On the Limitations of Large Language Models (LLMs): False Attribution | Apr 6, 2024 | Author Attribution, Hallucination | Unverified |
| On the Limits of Language Generation: Trade-Offs Between Hallucination and Mode Collapse | Nov 14, 2024 | Hallucination, Language Modeling | Unverified |
| On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models? | Nov 16, 2021 | Hallucination | Unverified |
| Predicting Text Preference Via Structured Comparative Reasoning | Nov 14, 2023 | Hallucination, Retrieval | Unverified |