| Title | Date | Topics | Status | Count |
| --- | --- | --- | --- | --- |
| From Misleading Queries to Accurate Answers: A Three-Stage Fine-Tuning Method for LLMs | Apr 15, 2025 | Hallucination, Question Answering | Unverified | 0 |
| Internal and External Knowledge Interactive Refinement Framework for Knowledge-Intensive Question Answering | Aug 23, 2024 | Hallucination, Question Answering | Unverified | 0 |
| InternalInspector I^2: Robust Confidence Estimation in LLMs through Internal States | Jun 17, 2024 | Benchmarking, Contrastive Learning | Unverified | 0 |
| Interpretable Zero-shot Learning with Infinite Class Concepts | May 6, 2025 | Hallucination, Zero-Shot Learning | Unverified | 0 |
| From "Hallucination" to "Suture": Insights from Language Philosophy to Enhance Large Language Models | Mar 18, 2025 | Hallucination, Philosophy | Unverified | 0 |
| Comparing Computational Architectures for Automated Journalism | Oct 8, 2022 | Data-to-Text Generation, Hallucination | Unverified | 0 |
| From Hallucinations to Facts: Enhancing Language Models with Curated Knowledge Graphs | Dec 24, 2024 | Hallucination, Knowledge Graphs | Unverified | 0 |
| SHARP: Unlocking Interactive Hallucination via Stance Transfer in Role-Playing Agents | Nov 12, 2024 | General Knowledge, Hallucination | Unverified | 0 |
| Comparative Study of Domain Driven Terms Extraction Using Large Language Models | Apr 2, 2024 | Document Summarization, Hallucination | Unverified | 0 |
| Know Where to Go: Make LLM a Relevant, Responsible, and Trustworthy Searcher | Oct 19, 2023 | Hallucination, Information Retrieval | Unverified | 0 |