| Title | Date | Topics |
|---|---|---|
| Can Open-source LLMs Enhance Data Synthesis for Toxic Detection? An Experimental Study | Nov 18, 2024 | Data Augmentation, Hallucination |
| Enhancing LLM Generation with Knowledge Hypergraph for Evidence-Based Medicine | Mar 18, 2025 | Hallucination, RAG |
| Can LLMs Detect Intrinsic Hallucinations in Paraphrasing and Machine Translation? | Apr 29, 2025 | Hallucination, Machine Translation |
| Applying RLAIF for Code Generation with API-usage in Lightweight LLMs | Jun 28, 2024 | Code Generation, Hallucination |
| Enhancing Hallucination Detection through Noise Injection | Feb 6, 2025 | Hallucination |
| Enhancing Guardrails for Safe and Secure Healthcare AI | Sep 25, 2024 | Hallucination, Misinformation |
| Enhancing Emergency Decision-making with Knowledge Graphs and Large Language Models | Nov 15, 2023 | Decision Making, Hallucination |
| Can LLM be a Good Path Planner based on Prompt Engineering? Mitigating the Hallucination for Path Planning | Aug 23, 2024 | Hallucination, Prompt Engineering |
| Applications of Large Language Model Reasoning in Feature Generation | Mar 15, 2025 | Computational Efficiency, Domain Adaptation |
| Enhanced Hallucination Detection in Neural Machine Translation through Simple Detector Aggregation | Feb 20, 2024 | Hallucination, Machine Translation |