| Title | Date | Tags | Status | Citations |
| --- | --- | --- | --- | --- |
| Can Structured Data Reduce Epistemic Uncertainty? | Oct 14, 2024 | Hallucination, Retrieval | Unverified | 0 |
| AGA-GAN: Attribute Guided Attention Generative Adversarial Network with U-Net for Face Hallucination | Nov 20, 2021 | Attribute, Face Hallucination | Unverified | 0 |
| Can Open-source LLMs Enhance Data Synthesis for Toxic Detection? An Experimental Study | Nov 18, 2024 | Data Augmentation, Hallucination | Unverified | 0 |
| Can LLMs Detect Intrinsic Hallucinations in Paraphrasing and Machine Translation? | Apr 29, 2025 | Hallucination, Machine Translation | Unverified | 0 |
| Applying RLAIF for Code Generation with API-usage in Lightweight LLMs | Jun 28, 2024 | Code Generation, Hallucination | Unverified | 0 |
| Piculet: Specialized Models-Guided Hallucination Decrease for Multimodal Large Language Models | Aug 2, 2024 | Hallucination | Unverified | 0 |
| Can LLM be a Good Path Planner Based on Prompt Engineering? Mitigating the Hallucination for Path Planning | Aug 23, 2024 | Hallucination, Prompt Engineering | Unverified | 0 |
| Applications of Large Language Model Reasoning in Feature Generation | Mar 15, 2025 | Computational Efficiency, Domain Adaptation | Unverified | 0 |
| Can Large Language Models Play Games? A Case Study of a Self-Play Approach | Mar 8, 2024 | Decision Making, Hallucination | Unverified | 0 |
| Can Knowledge Graphs Make Large Language Models More Trustworthy? An Empirical Study over Open-ended Question Answering | Oct 10, 2024 | Hallucination, Knowledge Graphs | Unverified | 0 |