| Title | Date | Tags | Code | |
|---|---|---|---|---|
| Hydra: An Agentic Reasoning Approach for Enhancing Adversarial Robustness and Mitigating Hallucinations in Vision-Language Models | Apr 19, 2025 | Adversarial Attack, Adversarial Defense | Unverified | 0 |
| Multi-Stage Retrieval for Operational Technology Cybersecurity Compliance Using Large Language Models: A Railway Casestudy | Apr 18, 2025 | Hallucination, Logical Reasoning | Unverified | 0 |
| Analyzing LLMs' Knowledge Boundary Cognition Across Languages Through the Lens of Internal Representations | Apr 18, 2025 | Hallucination | Code Available | 1 |
| Generate, but Verify: Reducing Hallucination in Vision-Language Models with Retrospective Resampling | Apr 17, 2025 | Hallucination | Code Available | 2 |
| VistaDPO: Video Hierarchical Spatial-Temporal Direct Preference Optimization for Large Video Models | Apr 17, 2025 | Hallucination, Video Understanding | Code Available | 1 |
| Low-hallucination Synthetic Captions for Large-Scale Vision-Language Model Pre-training | Apr 17, 2025 | Caption Generation, Hallucination | Unverified | 0 |
| Aspect-Based Summarization with Self-Aspect Retrieval Enhanced Generation | Apr 17, 2025 | Hallucination, In-Context Learning | Unverified | 0 |
| Why and How LLMs Hallucinate: Connecting the Dots with Subsequence Associations | Apr 17, 2025 | Decoder, Hallucination | Code Available | 0 |
| QLLM: Do We Really Need a Mixing Network for Credit Assignment in Multi-Agent Reinforcement Learning? | Apr 17, 2025 | Hallucination, Multi-agent Reinforcement Learning | Unverified | 0 |
| Naming is framing: How cybersecurity's language problems are repeating in AI governance | Apr 16, 2025 | Hallucination | Unverified | 0 |