| Title | Date | Topics |
| --- | --- | --- |
| A Topic-level Self-Correctional Approach to Mitigate Hallucinations in MLLMs | Nov 26, 2024 | Hallucination |
| Aligner: Efficient Alignment by Learning to Correct | Feb 4, 2024 | Hallucination |
| Improving the Reliability of Large Language Models by Leveraging Uncertainty-Aware In-Context Learning | Oct 7, 2023 | Hallucination, In-Context Learning |
| Improving the Reliability of LLMs: Combining CoT, RAG, Self-Consistency, and Self-Verification | May 13, 2025 | Hallucination, RAG |
| From Misleading Queries to Accurate Answers: A Three-Stage Fine-Tuning Method for LLMs | Apr 15, 2025 | Hallucination, Question Answering |
| From "Hallucination" to "Suture": Insights from Language Philosophy to Enhance Large Language Models | Mar 18, 2025 | Hallucination, Philosophy |
| Comparing Computational Architectures for Automated Journalism | Oct 8, 2022 | Data-to-Text Generation, Hallucination |
| Incremental Scene Synthesis | Nov 29, 2018 | Autonomous Navigation, Hallucination |
| From Hallucinations to Facts: Enhancing Language Models with Curated Knowledge Graphs | Dec 24, 2024 | Hallucination, Knowledge Graphs |
| SHARP: Unlocking Interactive Hallucination via Stance Transfer in Role-Playing Agents | Nov 12, 2024 | General Knowledge, Hallucination |