| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| A Head to Predict and a Head to Question: Pre-trained Uncertainty Quantification Heads for Hallucination Detection in LLM Outputs | May 13, 2025 | Hallucination, Uncertainty Quantification | Code Available |
| Detecting and Mitigating Hallucination in Large Vision Language Models via Fine-Grained AI Feedback | Apr 22, 2024 | Attribute, Hallucination | Code Available |
| FaithBench: A Diverse Hallucination Benchmark for Summarization by Modern LLMs | Oct 17, 2024 | Diversity, Hallucination | Code Available |
| Finding and Editing Multi-Modal Neurons in Pre-Trained Transformers | Nov 13, 2023 | Hallucination, Knowledge Editing | Code Available |
| Exploring Hallucination of Large Multimodal Models in Video Understanding: Benchmark, Analysis and Mitigation | Mar 25, 2025 | Hallucination, Hallucination Evaluation | Code Available |
| Exploring the Transferability of Visual Prompting for Multimodal Large Language Models | Apr 17, 2024 | Hallucination, Multimodal Reasoning | Code Available |
| EventHallusion: Diagnosing Event Hallucinations in Video LLMs | Sep 25, 2024 | Hallucination, Instruction Following | Code Available |
| Evaluation and Analysis of Hallucination in Large Vision-Language Models | Aug 29, 2023 | Hallucination, Hallucination Evaluation | Code Available |
| Extract Free Dense Misalignment from CLIP | Dec 24, 2024 | Hallucination, Image Generation | Code Available |
| Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation and Beyond | Jun 16, 2023 | Benchmarking, Evidence Selection | Code Available |