| Title | Date | Topics | Code | Count |
| --- | --- | --- | --- | --- |
| HELPD: Mitigating Hallucination of LVLMs by Hierarchical Feedback Learning with Vision-enhanced Penalty Decoding | Sep 30, 2024 | Hallucination, Object | Code Available | 0 |
| LLM Hallucinations in Practical Code Generation: Phenomena, Mechanism, and Mitigation | Sep 30, 2024 | Code Generation, Hallucination | Code Available | 0 |
| MedHalu: Hallucinations in Responses to Healthcare Queries by Large Language Models | Sep 29, 2024 | Hallucination | Unverified | 0 |
| DENEB: A Hallucination-Robust Automatic Evaluation Metric for Image Captioning | Sep 28, 2024 | Hallucination, Image Captioning | Unverified | 0 |
| HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection | Sep 26, 2024 | Hallucination | Code Available | 0 |
| Enhancing Guardrails for Safe and Secure Healthcare AI | Sep 25, 2024 | Hallucination, Misinformation | Unverified | 0 |
| Pre-trained Language Models Return Distinguishable Probability Distributions to Unfaithfully Hallucinated Texts | Sep 25, 2024 | Hallucination | Code Available | 0 |
| RoleBreak: Character Hallucination as a Jailbreak Attack in Role-Playing Systems | Sep 25, 2024 | Hallucination | Unverified | 0 |
| EventHallusion: Diagnosing Event Hallucinations in Video LLMs | Sep 25, 2024 | Hallucination, Instruction Following | Code Available | 1 |
| A Unified Hallucination Mitigation Framework for Large Vision-Language Models | Sep 24, 2024 | Hallucination, Question Answering | Code Available | 0 |