| Don't Believe Everything You Read: Enhancing Summarization Interpretability through Automatic Identification of Hallucinations in Large Language Models | Dec 22, 2023 | Hallucination, Machine Translation | Unverified | 0 |
| Do More Details Always Introduce More Hallucinations in LVLM-based Image Captioning? | Jun 18, 2024 | Attribute, Hallucination | Unverified | 0 |
| MAO: A Framework for Process Model Generation with Multi-Agent Orchestration | Aug 4, 2024 | Hallucination, Software Testing | Unverified | 0 |
| ADeLA: Automatic Dense Labeling With Attention for Viewpoint Shift in Semantic Segmentation | Jan 1, 2022 | Domain Adaptation, Hallucination | Unverified | 0 |
| Do LLMs Know about Hallucination? An Empirical Investigation of LLM's Hidden States | Feb 15, 2024 | Hallucination | Unverified | 0 |
| Blind Image Super-Resolution with Spatial Context Hallucination | Sep 25, 2020 | Blind Super-Resolution, Deblurring | Unverified | 0 |
| An Evolutionary Large Language Model for Hallucination Mitigation | Dec 3, 2024 | Dataset Generation, Hallucination | Unverified | 0 |
| Does the Generator Mind its Contexts? An Analysis of Generative Model Faithfulness under Context Transfer | Feb 22, 2024 | Generative Question Answering, Hallucination | Unverified | 0 |
| Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? | Jun 20, 2024 | Caption Generation, Hallucination | Unverified | 0 |
| Black-Box Visual Prompt Engineering for Mitigating Object Hallucination in Large Vision Language Models | Apr 30, 2025 | Hallucination, Object | Unverified | 0 |