| Title | Date | Tags | Code | | |
| --- | --- | --- | --- | --- | --- |
| HalluShift: Measuring Distribution Shifts towards Hallucination Detection in LLMs | Apr 13, 2025 | Hallucination, Misinformation | Code Available | 0 | 5 |
| Handling Ontology Gaps in Semantic Parsing | Jun 27, 2024 | Hallucination, Question Answering | Code Available | 0 | 5 |
| SmurfCat at SemEval-2024 Task 6: Leveraging Synthetic Data for Hallucination Detection | Apr 9, 2024 | Hallucination | Code Available | 0 | 5 |
| Enhancing Retrieval Processes for Language Generation with Augmented Queries | Feb 6, 2024 | Hallucination, Language Modeling | Unverified | 0 | 0 |
| Enhancing RAG with Active Learning on Conversation Records: Reject Incapables and Answer Capables | Feb 13, 2025 | Active Learning, Hallucination | Unverified | 0 | 0 |
| Can We Catch the Elephant? A Survey of the Evolvement of Hallucination Evaluation on Natural Language Generation | Apr 18, 2024 | Hallucination, Hallucination Evaluation | Unverified | 0 | 0 |
| From Training-Free to Adaptive: Empirical Insights into MLLMs' Understanding of Detection Information | Jan 31, 2024 | Hallucination, object-detection | Unverified | 0 | 0 |
| Can Structured Data Reduce Epistemic Uncertainty? | Oct 14, 2024 | Hallucination, Retrieval | Unverified | 0 | 0 |
| Enhancing Multi-Agent Consensus through Third-Party LLM Integration: Analyzing Uncertainty and Mitigating Hallucinations in Large Language Models | Nov 25, 2024 | Hallucination | Unverified | 0 | 0 |
| Enhancing Mathematical Reasoning in Large Language Models with Self-Consistency-Based Hallucination Detection | Apr 13, 2025 | Answer Selection, Automated Theorem Proving | Unverified | 0 | 0 |
| Can Open-source LLMs Enhance Data Synthesis for Toxic Detection?: An Experimental Study | Nov 18, 2024 | Data Augmentation, Hallucination | Unverified | 0 | 0 |
| Enhancing LLM Generation with Knowledge Hypergraph for Evidence-Based Medicine | Mar 18, 2025 | Hallucination, RAG | Unverified | 0 | 0 |
| Can LLMs Detect Intrinsic Hallucinations in Paraphrasing and Machine Translation? | Apr 29, 2025 | Hallucination, Machine Translation | Unverified | 0 | 0 |
| Applying RLAIF for Code Generation with API-usage in Lightweight LLMs | Jun 28, 2024 | Code Generation, Hallucination | Unverified | 0 | 0 |
| Enhancing Hallucination Detection through Noise Injection | Feb 6, 2025 | Hallucination | Unverified | 0 | 0 |
| Enhancing Guardrails for Safe and Secure Healthcare AI | Sep 25, 2024 | Hallucination, Misinformation | Unverified | 0 | 0 |
| Enhancing Emergency Decision-making with Knowledge Graphs and Large Language Models | Nov 15, 2023 | Decision Making, Hallucination | Unverified | 0 | 0 |
| Can LLM be a Good Path Planner based on Prompt Engineering? Mitigating the Hallucination for Path Planning | Aug 23, 2024 | Hallucination, Prompt Engineering | Unverified | 0 | 0 |
| Applications of Large Language Model Reasoning in Feature Generation | Mar 15, 2025 | Computational Efficiency, Domain Adaptation | Unverified | 0 | 0 |
| Enhanced Hallucination Detection in Neural Machine Translation through Simple Detector Aggregation | Feb 20, 2024 | Hallucination, Machine Translation | Unverified | 0 | 0 |
| Enhanced document retrieval with topic embeddings | Aug 19, 2024 | Hallucination, RAG | Unverified | 0 | 0 |
| Can Large Language Models Play Games? A Case Study of A Self-Play Approach | Mar 8, 2024 | Decision Making, Hallucination | Unverified | 0 | 0 |
| Endowing Embodied Agents with Spatial Reasoning Capabilities for Vision-and-Language Navigation | Apr 9, 2025 | Hallucination, Spatial Reasoning | Unverified | 0 | 0 |
| Can Knowledge Graphs Make Large Language Models More Trustworthy? An Empirical Study over Open-ended Question Answering | Oct 10, 2024 | Hallucination, Knowledge Graphs | Unverified | 0 | 0 |
| A Perspective for Adapting Generalist AI to Specialized Medical AI Applications and Their Challenges | Oct 28, 2024 | Drug Discovery, Hallucination | Unverified | 0 | 0 |