| Toxicity in Multilingual Machine Translation at Scale | Oct 6, 2022 | Hallucination, Machine Translation | —Unverified | 0 | 0 |
| TPC: Cross-Temporal Prediction Connection for Vision-Language Model Hallucination Reduction | Mar 6, 2025 | Hallucination, Language Modeling | —Unverified | 0 | 0 |
| Trading off Consistency and Dimensionality of Convex Surrogates for the Mode | Feb 16, 2024 | Hallucination, Information Retrieval | —Unverified | 0 | 0 |
| Training Dynamics for Text Summarization Models | Oct 15, 2021 | Hallucination, News Summarization | —Unverified | 0 | 0 |
| Training Dynamics for Text Summarization Models | Nov 16, 2021 | Hallucination, News Summarization | —Unverified | 0 | 0 |
| Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability | Aug 14, 2024 | Hallucination | —Unverified | 0 | 0 |
| Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it? | Sep 23, 2021 | Hallucination, Transfer Learning | —Unverified | 0 | 0 |
| Transforming Sequence Tagging Into A Seq2Seq Task | Mar 16, 2022 | Hallucination, Structured Prediction | —Unverified | 0 | 0 |
| Trapping LLM Hallucinations Using Tagged Context Prompts | Jun 9, 2023 | Hallucination | —Unverified | 0 | 0 |
| Tricking Retrievers with Influential Tokens: An Efficient Black-Box Corpus Poisoning Attack | Mar 27, 2025 | Hallucination, RAG | —Unverified | 0 | 0 |
| Triggering Hallucinations in LLMs: A Quantitative Study of Prompt-Induced Hallucination in Large Language Models | May 1, 2025 | Hallucination | —Unverified | 0 | 0 |
| TrumorGPT: Graph-Based Retrieval-Augmented Large Language Model for Fact-Checking | May 11, 2025 | Fact Checking, Few-Shot Learning | —Unverified | 0 | 0 |
| Trustful LLMs: Customizing and Grounding Text Generation with Knowledge Bases and Dual Decoders | Nov 12, 2024 | Decoder, Hallucination | —Unverified | 0 | 0 |
| TRUST -- Transformer-Driven U-Net for Sparse Target Recovery | Jun 1, 2025 | Decoder, Hallucination | —Unverified | 0 | 0 |
| TruthFlow: Truthful LLM Generation via Representation Flow Correction | Feb 6, 2025 | Hallucination, TruthfulQA | —Unverified | 0 | 0 |
| Tuning-Free Accountable Intervention for LLM Deployment -- A Metacognitive Approach | Mar 8, 2024 | Decision Making, Hallucination | —Unverified | 0 | 0 |
| Two-Layer Retrieval-Augmented Generation Framework for Low-Resource Medical Question Answering Using Reddit Data: Proof-of-Concept Study | May 29, 2024 | Answer Generation, Hallucination | —Unverified | 0 | 0 |
| Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs | May 26, 2025 | Hallucination, Question Answering | —Unverified | 0 | 0 |
| Uncertainty-Aware Fusion: An Ensemble Framework for Mitigating Hallucinations in Large Language Models | Feb 22, 2025 | Hallucination, Question Answering | —Unverified | 0 | 0 |
| Uncertainty Aware Review Hallucination for Science Article Classification | Aug 1, 2021 | Classification, Hallucination | —Unverified | 0 | 0 |
| Uncertainty-o: One Model-agnostic Framework for Unveiling Uncertainty in Large Multimodal Models | Jun 9, 2025 | Hallucination | —Unverified | 0 | 0 |
| UNCLE: Uncertainty Expressions in Long-Form Generation | May 22, 2025 | 4k, Form | —Unverified | 0 | 0 |
| Understanding Alignment in Multimodal LLMs: A Comprehensive Study | Jul 2, 2024 | Hallucination | —Unverified | 0 | 0 |
| Understanding and predicting user dissatisfaction in a neural generative chatbot | Jul 1, 2021 | Chatbot, Hallucination | —Unverified | 0 | 0 |
| Understanding Your Agent: Leveraging Large Language Models for Behavior Explanation | Nov 29, 2023 | Counterfactual, Hallucination | —Unverified | 0 | 0 |