| LargePiG: Your Large Language Model is Secretly a Pointer Generator | Oct 15, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| Automatically Generating Visual Hallucination Test Cases for Multimodal Large Language Models | Oct 15, 2024 | Hallucination, Large Language Model | Code Available | 0 |
| Magnifier Prompt: Tackling Multimodal Hallucination via Extremely Simple Instructions | Oct 15, 2024 | Hallucination | Unverified | 0 |
| Have the VLMs Lost Confidence? A Study of Sycophancy in VLMs | Oct 15, 2024 | Hallucination | Unverified | 0 |
| Can Structured Data Reduce Epistemic Uncertainty? | Oct 14, 2024 | Hallucination, Retrieval | Unverified | 0 |
| Parenting: Optimizing Knowledge Selection of Retrieval-Augmented Language Models with Parameter Decoupling and Tailored Tuning | Oct 14, 2024 | Hallucination, RAG | Unverified | 0 |
| SkillAggregation: Reference-free LLM-Dependent Aggregation | Oct 14, 2024 | Chatbot, Hallucination | Unverified | 0 |
| Medico: Towards Hallucination Detection and Correction with Multi-source Evidence Fusion | Oct 14, 2024 | Hallucination | Unverified | 0 |
| Honest AI: Fine-Tuning "Small" Language Models to Say "I Don't Know", and Reducing Hallucination in RAG | Oct 13, 2024 | Hallucination, RAG | Unverified | 0 |
| Collu-Bench: A Benchmark for Predicting Language Model Hallucinations in Code | Oct 13, 2024 | Code Generation, Hallucination | Unverified | 0 |