| Tricking Retrievers with Influential Tokens: An Efficient Black-Box Corpus Poisoning Attack | Mar 27, 2025 | Hallucination, RAG | Unverified | 0 |
| Triggering Hallucinations in LLMs: A Quantitative Study of Prompt-Induced Hallucination in Large Language Models | May 1, 2025 | Hallucination | Unverified | 0 |
| TrumorGPT: Graph-Based Retrieval-Augmented Large Language Model for Fact-Checking | May 11, 2025 | Fact Checking, Few-Shot Learning | Unverified | 0 |
| Trustful LLMs: Customizing and Grounding Text Generation with Knowledge Bases and Dual Decoders | Nov 12, 2024 | Decoder, Hallucination | Unverified | 0 |
| TRUST -- Transformer-Driven U-Net for Sparse Target Recovery | Jun 1, 2025 | Decoder, Hallucination | Unverified | 0 |
| TruthFlow: Truthful LLM Generation via Representation Flow Correction | Feb 6, 2025 | Hallucination, TruthfulQA | Unverified | 0 |
| Tuning-Free Accountable Intervention for LLM Deployment -- A Metacognitive Approach | Mar 8, 2024 | Decision Making, Hallucination | Unverified | 0 |
| Two-Layer Retrieval-Augmented Generation Framework for Low-Resource Medical Question Answering Using Reddit Data: Proof-of-Concept Study | May 29, 2024 | Answer Generation, Hallucination | Unverified | 0 |
| Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs | May 26, 2025 | Hallucination, Question Answering | Unverified | 0 |
| Uncertainty-Aware Fusion: An Ensemble Framework for Mitigating Hallucinations in Large Language Models | Feb 22, 2025 | Hallucination, Question Answering | Unverified | 0 |