| Interpretable Zero-shot Learning with Infinite Class Concepts | May 6, 2025 | Hallucination, Zero-Shot Learning | —Unverified | 0 | 0 |
| Interpreting and Mitigating Hallucination in MLLMs through Multi-agent Debate | Jul 30, 2024 | Hallucination | —Unverified | 0 | 0 |
| Invar-RAG: Invariant LLM-aligned Retrieval for Better Generation | Nov 11, 2024 | Hallucination, Information Retrieval | —Unverified | 0 | 0 |
| Investigating and Addressing Hallucinations of LLMs in Tasks Involving Negation | Jun 8, 2024 | Abstractive Text Summarization, Dialogue Generation | —Unverified | 0 | 0 |
| Investigating the Role of Prompting and External Tools in Hallucination Rates of Large Language Models | Oct 25, 2024 | Hallucination, Prompt Engineering | —Unverified | 0 | 0 |
| IPL: Leveraging Multimodal Large Language Models for Intelligent Product Listing | Oct 22, 2024 | Hallucination, RAG | —Unverified | 0 | 0 |
| Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection | Mar 12, 2025 | Decision Making, Fake News Detection | —Unverified | 0 | 0 |
| Is Your Text-to-Image Model Robust to Caption Noise? | Dec 27, 2024 | Descriptive, Hallucination | —Unverified | 0 | 0 |
| Iter-AHMCL: Alleviate Hallucination for Large Language Model via Iterative Model-level Contrastive Learning | Oct 16, 2024 | Contrastive Learning, Graph Construction | —Unverified | 0 | 0 |
| It's About Time: Incorporating Temporality in Retrieval Augmented Language Models | Jan 24, 2024 | Few-Shot Learning, Hallucination | —Unverified | 0 | 0 |