| Title | Date | Tags | Code | Count |
| --- | --- | --- | --- | --- |
| TruthFlow: Truthful LLM Generation via Representation Flow Correction | Feb 6, 2025 | Hallucination, TruthfulQA | Unverified | 0 |
| A Schema-Guided Reason-while-Retrieve framework for Reasoning on Scene Graphs with Large-Language-Models (LLMs) | Feb 5, 2025 | Hallucination, Spatial Reasoning | Unverified | 0 |
| Mitigating Object Hallucinations in Large Vision-Language Models via Attention Calibration | Feb 4, 2025 | Attribute, Hallucination | Unverified | 0 |
| Eliciting Language Model Behaviors with Investigator Agents | Feb 3, 2025 | Bayesian Inference, Hallucination | Unverified | 0 |
| MJ-VIDEO: Fine-Grained Benchmarking and Rewarding Video Preferences in Video Generation | Feb 3, 2025 | Benchmarking, Fairness | Unverified | 0 |
| SelfCheckAgent: Zero-Resource Hallucination Detection in Generative Large Language Models | Feb 3, 2025 | Hallucination | Unverified | 0 |
| Assessing the use of Diffusion models for motion artifact correction in brain MRI | Feb 3, 2025 | Diagnostic, Hallucination | Unverified | 0 |
| MINT: Mitigating Hallucinations in Large Vision-Language Models via Token Reduction | Feb 2, 2025 | Hallucination, Token Reduction | Unverified | 0 |
| Poison as Cure: Visual Noise for Mitigating Object Hallucinations in LVMs | Jan 31, 2025 | Hallucination, Object Hallucination | Unverified | 0 |
| Importing Phantoms: Measuring LLM Package Hallucination Vulnerabilities | Jan 31, 2025 | Code Generation, Hallucination | Unverified | 0 |
| Differentially Private Steering for Large Language Model Alignment | Jan 30, 2025 | Hallucination, Inference Attack | Code Available | 0 |
| Few-Shot Optimized Framework for Hallucination Detection in Resource-Limited NLP Systems | Jan 28, 2025 | Ensemble Learning, Hallucination | Unverified | 0 |
| Mitigating Hallucinated Translations in Large Language Models with Hallucination-focused Preference Optimization | Jan 28, 2025 | Decoder, Hallucination | Unverified | 0 |
| Open-Source Retrieval Augmented Generation Framework for Retrieving Accurate Medication Insights from Formularies for African Healthcare Workers | Jan 28, 2025 | Chatbot, Decision Making | Unverified | 0 |
| Scaling Large Vision-Language Models for Enhanced Multimodal Comprehension In Biomedical Image Analysis | Jan 26, 2025 | Articles, Hallucination | Unverified | 0 |
| Mirage in the Eyes: Hallucination Attack on Multi-modal Large Language Models with Only Attention Sink | Jan 25, 2025 | Hallucination, Text Generation | Unverified | 0 |
| Evaluating Hallucination in Large Vision-Language Models based on Context-Aware Object Similarities | Jan 25, 2025 | Hallucination, Object | Unverified | 0 |
| Measuring and Mitigating Hallucinations in Vision-Language Dataset Generation for Remote Sensing | Jan 24, 2025 | Caption Generation, Dataset Generation | Unverified | 0 |
| Hallucinations Can Improve Large Language Models in Drug Discovery | Jan 23, 2025 | Drug Discovery, Hallucination | Unverified | 0 |
| Comprehensive Modeling and Question Answering of Cancer Clinical Practice Guidelines using LLMs | Jan 23, 2025 | Diagnostic, Few-Shot Learning | Unverified | 0 |
| OnionEval: An Unified Evaluation of Fact-conflicting Hallucination for Small-Large Language Models | Jan 22, 2025 | Hallucination | Code Available | 0 |
| RAG-Reward: Optimizing RAG with Reward Modeling and RLHF | Jan 22, 2025 | Benchmarking, Hallucination | Unverified | 0 |
| Question-to-Question Retrieval for Hallucination-Free Knowledge Access: An Approach for Wikipedia and Wikidata Question Answering | Jan 20, 2025 | Answer Generation, Computational Efficiency | Unverified | 0 |
| Hallucination Mitigation using Agentic AI Natural Language-Based Frameworks | Jan 19, 2025 | AI Agent, Hallucination | Code Available | 0 |
| Attention-guided Self-reflection for Zero-shot Hallucination Detection in Large Language Models | Jan 17, 2025 | Hallucination | Unverified | 0 |