| Title | Date | Tags | Code Status | Count |
|---|---|---|---|---|
| Mirage in the Eyes: Hallucination Attack on Multi-modal Large Language Models with Only Attention Sink | Jan 25, 2025 | Hallucination, Text Generation | Unverified | 0 |
| Evaluating Hallucination in Large Vision-Language Models based on Context-Aware Object Similarities | Jan 25, 2025 | Hallucination, Object | Unverified | 0 |
| Measuring and Mitigating Hallucinations in Vision-Language Dataset Generation for Remote Sensing | Jan 24, 2025 | Caption Generation, Dataset Generation | Unverified | 0 |
| Hallucinations Can Improve Large Language Models in Drug Discovery | Jan 23, 2025 | Drug Discovery, Hallucination | Unverified | 0 |
| Comprehensive Modeling and Question Answering of Cancer Clinical Practice Guidelines using LLMs | Jan 23, 2025 | Diagnostic, Few-Shot Learning | Unverified | 0 |
| OnionEval: An Unified Evaluation of Fact-conflicting Hallucination for Small-Large Language Models | Jan 22, 2025 | Hallucination | Code Available | 0 |
| RAG-Reward: Optimizing RAG with Reward Modeling and RLHF | Jan 22, 2025 | Benchmarking, Hallucination | Unverified | 0 |
| Question-to-Question Retrieval for Hallucination-Free Knowledge Access: An Approach for Wikipedia and Wikidata Question Answering | Jan 20, 2025 | Answer Generation, Computational Efficiency | Unverified | 0 |
| Hallucination Mitigation using Agentic AI Natural Language-Based Frameworks | Jan 19, 2025 | AI Agent, Hallucination | Code Available | 0 |
| Attention-guided Self-reflection for Zero-shot Hallucination Detection in Large Language Models | Jan 17, 2025 | Hallucination | Unverified | 0 |