| Title | Date | Tasks |
| --- | --- | --- |
| Do Androids Know They're Only Dreaming of Electric Sheep? | Dec 28, 2023 | Hallucination, Hallucination Evaluation |
| Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? | Jun 20, 2024 | Caption Generation, Hallucination |
| Does the Generator Mind its Contexts? An Analysis of Generative Model Faithfulness under Context Transfer | Feb 22, 2024 | Generative Question Answering, Hallucination |
| Do LLMs Know about Hallucination? An Empirical Investigation of LLM's Hidden States | Feb 15, 2024 | Hallucination |
| Do More Details Always Introduce More Hallucinations in LVLM-based Image Captioning? | Jun 18, 2024 | Attribute, Hallucination |
| Don't Believe Everything You Read: Enhancing Summarization Interpretability through Automatic Identification of Hallucinations in Large Language Models | Dec 22, 2023 | Hallucination, Machine Translation |
| Do Robot Snakes Dream like Electric Sheep? Investigating the Effects of Architectural Inductive Biases on Hallucination | Oct 22, 2024 | Hallucination |
| DREAM: On hallucinations in AI-generated content for nuclear medicine imaging | Jun 16, 2025 | Diagnostic, Hallucination |
| DSVD: Dynamic Self-Verify Decoding for Faithful Generation in Large Language Models | Mar 5, 2025 | Hallucination, Text Generation |
| Dual-View Data Hallucination with Semantic Relation Guidance for Few-Shot Image Recognition | Jan 13, 2024 | Hallucination, Novel Concepts |