SOTAVerified

Hallucination Papers

Showing 1326–1350 of 1816 papers

Title | Status | Hype
Towards reducing hallucination in extracting information from financial reports using Large Language Models | | 0
Towards Reliable Medical Question Answering: Techniques and Challenges in Mitigating Hallucinations in Language Models | | 0
Towards Robust Evaluation of STEM Education: Leveraging MLLMs in Project-Based Learning | | 0
Towards Trustable Language Models: Investigating Information Quality of Large Language Models | | 0
Towards Understanding Text Hallucination of Diffusion Models via Local Generation Bias | | 0
Towards Verifiable Text Generation with Evolving Memory and Self-Reflection | | 0
Toxicity in Multilingual Machine Translation at Scale | | 0
TPC: Cross-Temporal Prediction Connection for Vision-Language Model Hallucination Reduction | | 0
Trading off Consistency and Dimensionality of Convex Surrogates for the Mode | | 0
Training Dynamics for Text Summarization Models | | 0
Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability | | 0
Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it? | | 0
Transforming Sequence Tagging Into A Seq2Seq Task | | 0
Trapping LLM Hallucinations Using Tagged Context Prompts | | 0
Tricking Retrievers with Influential Tokens: An Efficient Black-Box Corpus Poisoning Attack | | 0
Triggering Hallucinations in LLMs: A Quantitative Study of Prompt-Induced Hallucination in Large Language Models | | 0
TrumorGPT: Graph-Based Retrieval-Augmented Large Language Model for Fact-Checking | | 0
Trustful LLMs: Customizing and Grounding Text Generation with Knowledge Bases and Dual Decoders | | 0
TRUST -- Transformer-Driven U-Net for Sparse Target Recovery | | 0
TruthFlow: Truthful LLM Generation via Representation Flow Correction | | 0
Tuning-Free Accountable Intervention for LLM Deployment -- A Metacognitive Approach | | 0
Two-Layer Retrieval-Augmented Generation Framework for Low-Resource Medical Question Answering Using Reddit Data: Proof-of-Concept Study | | 0
Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs | | 0
Uncertainty-Aware Fusion: An Ensemble Framework for Mitigating Hallucinations in Large Language Models | | 0
