| Improving RNN-Transducers with Acoustic LookAhead | Jul 11, 2023 | Hallucination, Speech-to-Text |
| Improving Scientific Hypothesis Generation with Knowledge Grounded Large Language Models | Nov 4, 2024 | Experimental Design, Hallucination |
| Improving the Reliability of Large Language Models by Leveraging Uncertainty-Aware In-Context Learning | Oct 7, 2023 | Hallucination, In-Context Learning |
| Improving the Reliability of LLMs: Combining CoT, RAG, Self-Consistency, and Self-Verification | May 13, 2025 | Hallucination, RAG |
| Improving Whisper's Recognition Performance for Under-Represented Language Kazakh Leveraging Unpaired Speech and Text | Aug 10, 2024 | Automatic Speech Recognition, Hallucination |
| Incremental Scene Synthesis | Nov 29, 2018 | Autonomous Navigation, Hallucination |
| Inertial Hallucinations -- When Wearable Inertial Devices Start Seeing Things | Jul 14, 2022 | Hallucination, Sensor Fusion |
| Information-Theoretic Text Hallucination Reduction for Video-grounded Dialogue | Dec 12, 2022 | Hallucination, Sentence |
| Ingest-And-Ground: Dispelling Hallucinations from Continually-Pretrained LLMs with RAG | Sep 30, 2024 | Hallucination, RAG |
| Insights from Verification: Training a Verilog Generation LLM with Reinforcement Learning with Testbench Feedback | Apr 22, 2025 | Code Generation, Hallucination |
| Insights into Classifying and Mitigating LLMs' Hallucinations | Nov 14, 2023 | Hallucination, Machine Translation |
| Instance-level Facial Attributes Transfer with Geometry-Aware Flow | Nov 30, 2018 | Attribute, Hallucination |
| Instruction-Oriented Preference Alignment for Enhancing Multi-Modal Comprehension Capability of MLLMs | Mar 26, 2025 | Hallucination, Hallucination Evaluation |
| Internal and External Knowledge Interactive Refinement Framework for Knowledge-Intensive Question Answering | Aug 23, 2024 | Hallucination, Question Answering |
| InternalInspector I^2: Robust Confidence Estimation in LLMs through Internal States | Jun 17, 2024 | Benchmarking, Contrastive Learning |
| Interpretable Zero-shot Learning with Infinite Class Concepts | May 6, 2025 | Hallucination, Zero-Shot Learning |
| Interpreting and Mitigating Hallucination in MLLMs through Multi-agent Debate | Jul 30, 2024 | Hallucination |
| Invar-RAG: Invariant LLM-aligned Retrieval for Better Generation | Nov 11, 2024 | Hallucination, Information Retrieval |
| Investigating and Addressing Hallucinations of LLMs in Tasks Involving Negation | Jun 8, 2024 | Abstractive Text Summarization, Dialogue Generation |
| Investigating the Role of Prompting and External Tools in Hallucination Rates of Large Language Models | Oct 25, 2024 | Hallucination, Prompt Engineering |
| IPL: Leveraging Multimodal Large Language Models for Intelligent Product Listing | Oct 22, 2024 | Hallucination, RAG |
| Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection | Mar 12, 2025 | Decision Making, Fake News Detection |
| Is Your Text-to-Image Model Robust to Caption Noise? | Dec 27, 2024 | Descriptive, Hallucination |
| Iter-AHMCL: Alleviate Hallucination for Large Language Model via Iterative Model-level Contrastive Learning | Oct 16, 2024 | Contrastive Learning, Graph Construction |
| It's About Time: Incorporating Temporality in Retrieval Augmented Language Models | Jan 24, 2024 | Few-Shot Learning, Hallucination |