| Title | Date | Topics |
| --- | --- | --- |
| A Topic-level Self-Correctional Approach to Mitigate Hallucinations in MLLMs | Nov 26, 2024 | Hallucination |
| Aligner: Efficient Alignment by Learning to Correct | Feb 4, 2024 | Hallucination |
| Distillation of encoder-decoder transformers for sequence labelling | Feb 10, 2023 | Decoder, Few-Shot Learning |
| Investigating the Role of Prompting and External Tools in Hallucination Rates of Large Language Models | Oct 25, 2024 | Hallucination, Prompt Engineering |
| From Misleading Queries to Accurate Answers: A Three-Stage Fine-Tuning Method for LLMs | Apr 15, 2025 | Hallucination, Question Answering |
| IPL: Leveraging Multimodal Large Language Models for Intelligent Product Listing | Oct 22, 2024 | Hallucination, RAG |
| From "Hallucination" to "Suture": Insights from Language Philosophy to Enhance Large Language Models | Mar 18, 2025 | Hallucination, Philosophy |
| Comparing Computational Architectures for Automated Journalism | Oct 8, 2022 | Data-to-Text Generation, Hallucination |
| From Hallucinations to Facts: Enhancing Language Models with Curated Knowledge Graphs | Dec 24, 2024 | Hallucination, Knowledge Graphs |
| SHARP: Unlocking Interactive Hallucination via Stance Transfer in Role-Playing Agents | Nov 12, 2024 | General Knowledge, Hallucination |