| Reducing Tool Hallucination via Reliability Alignment | Dec 5, 2024 | Hallucination, Text Generation | Unverified | 0 |
| Reference-free Hallucination Detection for Large Vision-Language Models | Aug 11, 2024 | Hallucination, Question Answering | Unverified | 0 |
| REFIND: Retrieval-Augmented Factuality Hallucination Detection in Large Language Models | Feb 19, 2025 | Hallucination, Language Modeling | Unverified | 0 |
| Refine Knowledge of Large Language Models via Adaptive Contrastive Learning | Feb 11, 2025 | Contrastive Learning, Hallucination | Unverified | 0 |
| Student Data Paradox and Curious Case of Single Student-Tutor Model: Regressive Side Effects of Training LLMs for Personalized Learning | Apr 23, 2024 | ARC, Common Sense Reasoning | Unverified | 0 |
| Reinforcement Learning for Better Verbalized Confidence in Long-Form Generation | May 29, 2025 | Form, Hallucination | Unverified | 0 |
| Reinforcing Question Answering Agents with Minimalist Policy Gradient Optimization | May 20, 2025 | Hallucination, In-Context Learning | Unverified | 0 |
| Rejection Improves Reliability: Training LLMs to Refuse Unknown Questions Using RL from Knowledge Feedback | Mar 27, 2024 | Hallucination | Unverified | 0 |
| Relational Graph Learning for Grounded Video Description Generation | Dec 2, 2021 | Graph Learning, Hallucination | Unverified | 0 |
| Long-horizon Embodied Planning with Implicit Logical Inference and Hallucination Mitigation | Sep 24, 2024 | Diversity, Hallucination | Unverified | 0 |