| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| Seeing is Believing: Mitigating Hallucination in Large Vision-Language Models via CLIP-Guided Decoding | Feb 23, 2024 | Hallucination, Object | Code Available |
| Visual Hallucinations of Multi-modal Large Language Models | Feb 22, 2024 | Diversity, Hallucination | Code Available |
| TofuEval: Evaluating Hallucinations of LLMs on Topic-Focused Dialogue Summarization | Feb 20, 2024 | Hallucination, News Summarization | Code Available |
| Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models | Feb 18, 2024 | Hallucination, Object | Code Available |
| EFUF: Efficient Fine-grained Unlearning Framework for Mitigating Hallucinations in Multimodal Large Language Models | Feb 15, 2024 | Hallucination, Object Hallucination | Code Available |
| Uncertainty Quantification for In-Context Learning of Large Language Models | Feb 15, 2024 | Hallucination, In-Context Learning | Code Available |
| Into the Unknown: Self-Learning Large Language Models | Feb 14, 2024 | Hallucination, Self-Learning | Code Available |
| Gemini Goes to Med School: Exploring the Capabilities of Multimodal Large Language Models on Medical Challenge Problems & Hallucinations | Feb 10, 2024 | Diagnostic, Hallucination | Code Available |
| Introspective Planning: Aligning Robots' Uncertainty with Inherent Task Ambiguity | Feb 9, 2024 | Conformal Prediction, Hallucination | Code Available |
| INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection | Feb 6, 2024 | Diversity, Hallucination | Code Available |