| Title | Date | Tags | Code | Count |
| --- | --- | --- | --- | --- |
| Mitigating Open-Vocabulary Caption Hallucinations | Dec 6, 2023 | Diversity, Hallucination | Code Available | 1 |
| Weakly Supervised Detection of Hallucinations in LLM Activations | Dec 5, 2023 | Hallucination, Language Modeling | Code Available | 5 |
| Mitigating Fine-Grained Hallucination by Fine-Tuning Large Vision-Language Models with Caption Rewrites | Dec 4, 2023 | Hallucination, Hallucination Evaluation | Code Available | 1 |
| Behind the Magic, MERLIM: Multi-modal Evaluation Benchmark for Large Image-Language Models | Dec 3, 2023 | Hallucination, Visual Grounding | Code Available | 0 |
| RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback | Dec 1, 2023 | Hallucination, Image Captioning | Code Available | 6 |
| On Exploring the Reasoning Capability of Large Language Models with Knowledge Graphs | Dec 1, 2023 | Hallucination, Knowledge Graphs | Unverified | 0 |
| Understanding Your Agent: Leveraging Large Language Models for Behavior Explanation | Nov 29, 2023 | Counterfactual, Hallucination | Unverified | 0 |
| OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation | Nov 29, 2023 | Hallucination | Code Available | 2 |
| How to Build an AI Tutor That Can Adapt to Any Course Using Knowledge Graph-Enhanced Retrieval-Augmented Generation (KG-RAG) | Nov 29, 2023 | Hallucination, Knowledge Graphs | Unverified | 0 |
| Combating the "Sameness" in AI Art: Reflections on the Interactive AI Installation Fencing Hallucination | Nov 28, 2023 | Hallucination | Unverified | 0 |