| Black-Box Visual Prompt Engineering for Mitigating Object Hallucination in Large Vision Language Models | Apr 30, 2025 | Hallucination, Object | Unverified | 0 |
| Localizing Before Answering: A Hallucination Evaluation Benchmark for Grounded Medical Multimodal LLMs | Apr 30, 2025 | Hallucination, Hallucination Evaluation | Unverified | 0 |
| Antidote: A Unified Framework for Mitigating LVLM Hallucinations in Counterfactual Presupposition and Object Perception | Apr 29, 2025 | Counterfactual, Hallucination | Code Available | 1 |
| Hallucination by Code Generation LLMs: Taxonomy, Benchmarks, Mitigation, and Challenges | Apr 29, 2025 | Code Generation, Hallucination | Unverified | 0 |
| Can LLMs Detect Intrinsic Hallucinations in Paraphrasing and Machine Translation? | Apr 29, 2025 | Hallucination, Machine Translation | Unverified | 0 |
| An Automated Reinforcement Learning Reward Design Framework with Large Language Model for Cooperative Platoon Coordination | Apr 28, 2025 | Code Generation, Hallucination | Unverified | 0 |
| Explanatory Summarization with Discourse-Driven Planning | Apr 27, 2025 | Hallucination, Lay Summarization | Unverified | 0 |
| Uncertainty Quantification for Language Models: A Suite of Black-Box, White-Box, LLM Judge, and Ensemble Scorers | Apr 27, 2025 | Hallucination, Question Answering | Code Available | 5 |
| Validating Network Protocol Parsers with Traceable RFC Document Interpretation | Apr 25, 2025 | Hallucination | Unverified | 0 |
| Data-Driven Calibration of Prediction Sets in Large Vision-Language Models Based on Inductive Conformal Prediction | Apr 24, 2025 | Conformal Prediction, Hallucination | Unverified | 0 |