| Enhancing Guardrails for Safe and Secure Healthcare AI | Sep 25, 2024 | Hallucination, Misinformation | —Unverified | 0 | 0 |
| Enhancing Hallucination Detection through Noise Injection | Feb 6, 2025 | Hallucination | —Unverified | 0 | 0 |
| Enhancing LLM Generation with Knowledge Hypergraph for Evidence-Based Medicine | Mar 18, 2025 | Hallucination, RAG | —Unverified | 0 | 0 |
| Enhancing Mathematical Reasoning in Large Language Models with Self-Consistency-Based Hallucination Detection | Apr 13, 2025 | Answer Selection, Automated Theorem Proving | —Unverified | 0 | 0 |
| Enhancing Multi-Agent Consensus through Third-Party LLM Integration: Analyzing Uncertainty and Mitigating Hallucinations in Large Language Models | Nov 25, 2024 | Hallucination | —Unverified | 0 | 0 |
| From Training-Free to Adaptive: Empirical Insights into MLLMs' Understanding of Detection Information | Jan 31, 2024 | Hallucination, object-detection | —Unverified | 0 | 0 |
| Enhancing RAG with Active Learning on Conversation Records: Reject Incapables and Answer Capables | Feb 13, 2025 | Active Learning, Hallucination | —Unverified | 0 | 0 |
| Enhancing Retrieval Processes for Language Generation with Augmented Queries | Feb 6, 2024 | Hallucination, Language Modeling | —Unverified | 0 | 0 |
| Enhancing Scientific Reproducibility Through Automated BioCompute Object Creation Using Retrieval-Augmented Generation from Publications | Sep 23, 2024 | Hallucination, Long-Context Understanding | —Unverified | 0 | 0 |
| Enhancing Text-to-SQL Capabilities of Large Language Models via Domain Database Knowledge Injection | Sep 24, 2024 | Hallucination, Semantic Parsing | —Unverified | 0 | 0 |