| Tuning-Free Accountable Intervention for LLM Deployment -- A Metacognitive Approach | Mar 8, 2024 | Decision Making, Hallucination | Unverified | 0 | 0 |
| Two-Layer Retrieval-Augmented Generation Framework for Low-Resource Medical Question Answering Using Reddit Data: Proof-of-Concept Study | May 29, 2024 | Answer Generation, Hallucination | Unverified | 0 | 0 |
| Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs | May 26, 2025 | Hallucination, Question Answering | Unverified | 0 | 0 |
| Uncertainty-Aware Fusion: An Ensemble Framework for Mitigating Hallucinations in Large Language Models | Feb 22, 2025 | Hallucination, Question Answering | Unverified | 0 | 0 |
| Uncertainty Aware Review Hallucination for Science Article Classification | Aug 1, 2021 | Classification, Hallucination | Unverified | 0 | 0 |
| Uncertainty-o: One Model-agnostic Framework for Unveiling Uncertainty in Large Multimodal Models | Jun 9, 2025 | Hallucination | Unverified | 0 | 0 |
| UNCLE: Uncertainty Expressions in Long-Form Generation | May 22, 2025 | Long-Form Generation | Unverified | 0 | 0 |
| Understanding Alignment in Multimodal LLMs: A Comprehensive Study | Jul 2, 2024 | Hallucination | Unverified | 0 | 0 |
| Understanding and predicting user dissatisfaction in a neural generative chatbot | Jul 1, 2021 | Chatbot, Hallucination | Unverified | 0 | 0 |
| Understanding Your Agent: Leveraging Large Language Models for Behavior Explanation | Nov 29, 2023 | Counterfactual, Hallucination | Unverified | 0 | 0 |