| Title | Date | Tags | Code | Citations |
| --- | --- | --- | --- | --- |
| Hallucination Mitigation using Agentic AI Natural Language-Based Frameworks | Jan 19, 2025 | AI Agent, Hallucination | Code Available | 0 |
| ArxEval: Evaluating Retrieval and Generation in Language Models for Scientific Literature | Jan 17, 2025 | Hallucination, Retrieval | Unverified | 0 |
| Attention-guided Self-reflection for Zero-shot Hallucination Detection in Large Language Models | Jan 17, 2025 | Hallucination | Unverified | 0 |
| FRAG: A Flexible Modular Framework for Retrieval-Augmented Generation based on Knowledge Graphs | Jan 17, 2025 | Hallucination, Knowledge Graphs | Unverified | 0 |
| A Survey on Responsible LLMs: Inherent Risk, Malicious Use, and Mitigation Strategy | Jan 16, 2025 | Hallucination, Survey | Unverified | 0 |
| ChartInsighter: An Approach for Mitigating Hallucination in Time-series Chart Summary Generation with A Benchmark Dataset | Jan 16, 2025 | Hallucination, Sentence | Code Available | 1 |
| Mitigating Hallucinations in Large Vision-Language Models via DPO: On-Policy Data Hold the Key | Jan 16, 2025 | 16k, Hallucination | Code Available | 2 |
| Knowledge Graph-based Retrieval-Augmented Generation for Schema Matching | Jan 15, 2025 | Hallucination, Knowledge Graphs | Code Available | 1 |
| Multimodal LLMs Can Reason about Aesthetics in Zero-Shot | Jan 15, 2025 | Benchmarking, Hallucination | Code Available | 1 |
| HALoGEN: Fantastic LLM Hallucinations and Where to Find Them | Jan 14, 2025 | Hallucination, World Knowledge | Unverified | 0 |