| Title | Date | Tags | Status | Count |
| --- | --- | --- | --- | --- |
| Ingest-And-Ground: Dispelling Hallucinations from Continually-Pretrained LLMs with RAG | Sep 30, 2024 | Hallucination, RAG | Unverified | 0 |
| From Pixels to Tokens: Revisiting Object Hallucinations in Large Vision-Language Models | Oct 9, 2024 | Attribute, Hallucination | Unverified | 0 |
| A Topic-level Self-Correctional Approach to Mitigate Hallucinations in MLLMs | Nov 26, 2024 | Hallucination | Unverified | 0 |
| Aligner: Efficient Alignment by Learning to Correct | Feb 4, 2024 | Hallucination | Unverified | 0 |
| From Misleading Queries to Accurate Answers: A Three-Stage Fine-Tuning Method for LLMs | Apr 15, 2025 | Hallucination, Question Answering | Unverified | 0 |
| From "Hallucination" to "Suture": Insights from Language Philosophy to Enhance Large Language Models | Mar 18, 2025 | Hallucination, Philosophy | Unverified | 0 |
| Comparing Computational Architectures for Automated Journalism | Oct 8, 2022 | Data-to-Text Generation, Hallucination | Unverified | 0 |
| DHCP: Detecting Hallucinations by Cross-modal Attention Pattern in Large Vision-Language Models | Nov 27, 2024 | Attribute, Hallucination | Unverified | 0 |
| From Hallucinations to Facts: Enhancing Language Models with Curated Knowledge Graphs | Dec 24, 2024 | Hallucination, Knowledge Graphs | Unverified | 0 |
| SHARP: Unlocking Interactive Hallucination via Stance Transfer in Role-Playing Agents | Nov 12, 2024 | General Knowledge, Hallucination | Unverified | 0 |