| Title | Date | Tags | Code | # |
| --- | --- | --- | --- | --- |
| A Novel Approach to Eliminating Hallucinations in Large Language Model-Assisted Causal Discovery | Nov 16, 2024 | Causal Discovery, Hallucination | Unverified | 0 |
| Chain-of-Programming (CoP): Empowering Large Language Models for Geospatial Code Generation | Nov 16, 2024 | Code Generation, Data Visualization | Unverified | 0 |
| ViBe: A Text-to-Video Benchmark for Evaluating Hallucination in Large Multimodal Models | Nov 16, 2024 | Hallucination, Video Generation | Unverified | 0 |
| Thinking Before Looking: Improving Multimodal LLM Reasoning via Mitigating Visual Hallucination | Nov 15, 2024 | Hallucination, Multimodal Reasoning | Code Available | 1 |
| Layer Importance and Hallucination Analysis in Large Language Models via Enhanced Activation Variance-Sparsity | Nov 15, 2024 | Contrastive Learning, Hallucination | Unverified | 0 |
| Seeing Clearly by Layer Two: Enhancing Attention Heads to Alleviate Hallucination in LVLMs | Nov 15, 2024 | Hallucination | Unverified | 0 |
| Mitigating Hallucination in Multimodal Large Language Model via Hallucination-targeted Direct Preference Optimization | Nov 15, 2024 | Hallucination, Hallucination Evaluation | Unverified | 0 |
| DAHL: Domain-specific Automated Hallucination Evaluation of Long-Form Text through a Benchmark Dataset in Biomedicine | Nov 14, 2024 | Form, Hallucination | Code Available | 0 |
| On the Limits of Language Generation: Trade-Offs Between Hallucination and Mode Collapse | Nov 14, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| LLM Hallucination Reasoning with Zero-shot Knowledge Test | Nov 14, 2024 | Hallucination | Unverified | 0 |