| Title | Date | Tasks | Code | Stars |
| --- | --- | --- | --- | --- |
| Mirage in the Eyes: Hallucination Attack on Multi-modal Large Language Models with Only Attention Sink | Jan 25, 2025 | Hallucination, Text Generation | Unverified | 0 |
| Evaluating Hallucination in Large Vision-Language Models based on Context-Aware Object Similarities | Jan 25, 2025 | Hallucination, Object | Unverified | 0 |
| Measuring and Mitigating Hallucinations in Vision-Language Dataset Generation for Remote Sensing | Jan 24, 2025 | Caption Generation, Dataset Generation | Unverified | 0 |
| Fast Think-on-Graph: Wider, Deeper and Faster Reasoning of Large Language Model on Knowledge Graph | Jan 24, 2025 | Community Detection, Hallucination | Code Available | 2 |
| Comprehensive Modeling and Question Answering of Cancer Clinical Practice Guidelines using LLMs | Jan 23, 2025 | Diagnostic, Few-Shot Learning | Unverified | 0 |
| Hallucinations Can Improve Large Language Models in Drug Discovery | Jan 23, 2025 | Drug Discovery, Hallucination | Unverified | 0 |
| RAG-Reward: Optimizing RAG with Reward Modeling and RLHF | Jan 22, 2025 | Benchmarking, Hallucination | Unverified | 0 |
| OnionEval: An Unified Evaluation of Fact-conflicting Hallucination for Small-Large Language Models | Jan 22, 2025 | Hallucination | Code Available | 0 |
| PAINT: Paying Attention to INformed Tokens to Mitigate Hallucination in Large Vision-Language Model | Jan 21, 2025 | Hallucination, Image Captioning | Code Available | 1 |
| Question-to-Question Retrieval for Hallucination-Free Knowledge Access: An Approach for Wikipedia and Wikidata Question Answering | Jan 20, 2025 | Answer Generation, Computational Efficiency | Unverified | 0 |