| Title | Date | Tags | Code | Stars |
|---|---|---|---|---|
| Treble Counterfactual VLMs: A Causal Approach to Hallucination | Mar 8, 2025 | Autonomous Driving, Counterfactual | Code Available | 0 |
| Maximum Hallucination Standards for Domain-Specific Large Language Models | Mar 7, 2025 | Attribute, Hallucination | Unverified | 0 |
| SINdex: Semantic INconsistency Index for Hallucination Detection in LLMs | Mar 7, 2025 | Clustering, Hallucination | Unverified | 0 |
| TPC: Cross-Temporal Prediction Connection for Vision-Language Model Hallucination Reduction | Mar 6, 2025 | Hallucination, Language Modeling | Unverified | 0 |
| LVLM-Compress-Bench: Benchmarking the Broader Impact of Large Vision-Language Model Compression | Mar 6, 2025 | Benchmarking, Common Sense Reasoning | Code Available | 0 |
| Towards Understanding Text Hallucination of Diffusion Models via Local Generation Bias | Mar 5, 2025 | Denoising, Hallucination | Unverified | 0 |
| Monitoring Decoding: Mitigating Hallucination via Evaluating the Factuality of Partial Response during Generation | Mar 5, 2025 | Hallucination | Unverified | 0 |
| Attentive Reasoning Queries: A Systematic Method for Optimizing Instruction-Following in Large Language Models | Mar 5, 2025 | Hallucination, Instruction Following | Code Available | 11 |
| See What You Are Told: Visual Attention Sink in Large Multimodal Models | Mar 5, 2025 | Hallucination | Unverified | 0 |
| DSVD: Dynamic Self-Verify Decoding for Faithful Generation in Large Language Models | Mar 5, 2025 | Hallucination, Text Generation | Unverified | 0 |