| Title | Date | Tags | Status | Count |
| --- | --- | --- | --- | --- |
| RosePO: Aligning LLM-based Recommenders with Human Values | Oct 16, 2024 | Hallucination, Recommendation Systems | Unverified | 0 |
| SAFE: A Sparse Autoencoder-Based Framework for Robust Query Enrichment and Hallucination Mitigation in LLMs | Mar 4, 2025 | Hallucination | Unverified | 0 |
| Safety challenges of AI in medicine in the era of large language models | Sep 11, 2024 | Hallucination | Unverified | 0 |
| SAG: Style-Aligned Article Generation via Model Collaboration | Oct 4, 2024 | Hallucination, Instruction Following | Unverified | 0 |
| Scaling Large Vision-Language Models for Enhanced Multimodal Comprehension In Biomedical Image Analysis | Jan 26, 2025 | Articles, Hallucination | Unverified | 0 |
| Scaling Laws for Discriminative Classification in Large Language Models | May 24, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| Score-based Generative Priors Guided Model-driven Network for MRI Reconstruction | May 5, 2024 | Denoising, Hallucination | Unverified | 0 |
| Second Order State Hallucinations for Adversarial Attack Mitigation in Formation Control of Multi-Agent Systems | Jun 14, 2025 | Adversarial Attack, Hallucination | Unverified | 0 |
| Securing Reliability: A Brief Overview on Enhancing In-Context Learning for Foundation Models | Feb 27, 2024 | Hallucination, In-Context Learning | Unverified | 0 |
| SpatialPIN: Enhancing Spatial Reasoning Capabilities of Vision-Language Models through Prompting and Interacting 3D Priors | Mar 18, 2024 | Hallucination, Motion Planning | Unverified | 0 |