SOTAVerified

Hallucination Papers

Showing 951-1000 of 1816 papers

| Title | Status | Hype |
| --- | --- | --- |
| Think More, Hallucinate Less: Mitigating Hallucinations via Dual Process of Fast and Slow Thinking | | 0 |
| Think Twice Before Trusting: Self-Detection for Large Language Models through Comprehensive Answer Reflection | | 0 |
| Thutmose Tagger: Single-pass neural model for Inverse Text Normalization | | 0 |
| Tianyi: A Traditional Chinese Medicine all-rounder language model and its Real-World Clinical Practice | | 0 |
| TLDR: Token-Level Detective Reward Model for Large Vision Language Models | | 0 |
| TN-Eval: Rubric and Evaluation Protocols for Measuring the Quality of Behavioral Therapy Notes | | 0 |
| Token Preference Optimization with Self-Calibrated Visual-Anchored Rewards for Hallucination Mitigation | | 0 |
| Tomographic Foundation Model -- FORCE: Flow-Oriented Reconstruction Conditioning Engine | | 0 |
| Comprehensive Evaluation of Large Language Models for Topic Modeling | | 0 |
| Toward Personalizing Quantum Computing Education: An Evolutionary LLM-Powered Approach | | 0 |
| Toward Robust Hyper-Detailed Image Captioning: A Multiagent Approach and Dual Evaluation Metrics for Factuality and Coverage | | 0 |
| Towards Analyzing and Mitigating Sycophancy in Large Vision-Language Models | | 0 |
| Towards a Reliable Offline Personal AI Assistant for Long Duration Spaceflight | | 0 |
| CorpusLM: Towards a Unified Language Model on Corpus for Knowledge-Intensive Tasks | | 0 |
| Towards Clinical Encounter Summarization: Learning to Compose Discharge Summaries from Prior Notes | | 0 |
| Towards Detecting LLMs Hallucination via Markov Chain-based Multi-agent Debate Framework | | 0 |
| Towards Mitigating Hallucination in Large Language Models via Self-Reflection | | 0 |
| Towards Multi-Source Retrieval-Augmented Generation via Synergizing Reasoning and Preference-Driven Retrieval | | 0 |
| Towards Omnidirectional Reasoning with 360-R1: A Dataset, Benchmark, and GRPO-based Method | | 0 |
| Towards reducing hallucination in extracting information from financial reports using Large Language Models | | 0 |
| Towards Reliable Medical Question Answering: Techniques and Challenges in Mitigating Hallucinations in Language Models | | 0 |
| Towards Robust Evaluation of STEM Education: Leveraging MLLMs in Project-Based Learning | | 0 |
| Towards Trustable Language Models: Investigating Information Quality of Large Language Models | | 0 |
| Towards Understanding Text Hallucination of Diffusion Models via Local Generation Bias | | 0 |
| Towards Verifiable Text Generation with Evolving Memory and Self-Reflection | | 0 |
| Toxicity in Multilingual Machine Translation at Scale | | 0 |
| TPC: Cross-Temporal Prediction Connection for Vision-Language Model Hallucination Reduction | | 0 |
| Trading off Consistency and Dimensionality of Convex Surrogates for the Mode | | 0 |
| Training Dynamics for Text Summarization Models | | 0 |
| Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability | | 0 |
| Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it? | | 0 |
| Transforming Sequence Tagging Into A Seq2Seq Task | | 0 |
| Trapping LLM Hallucinations Using Tagged Context Prompts | | 0 |
| Tricking Retrievers with Influential Tokens: An Efficient Black-Box Corpus Poisoning Attack | | 0 |
| Triggering Hallucinations in LLMs: A Quantitative Study of Prompt-Induced Hallucination in Large Language Models | | 0 |
| TrumorGPT: Graph-Based Retrieval-Augmented Large Language Model for Fact-Checking | | 0 |
| Trustful LLMs: Customizing and Grounding Text Generation with Knowledge Bases and Dual Decoders | | 0 |
| TRUST -- Transformer-Driven U-Net for Sparse Target Recovery | | 0 |
| TruthFlow: Truthful LLM Generation via Representation Flow Correction | | 0 |
| Tuning-Free Accountable Intervention for LLM Deployment -- A Metacognitive Approach | | 0 |
| Two-Layer Retrieval-Augmented Generation Framework for Low-Resource Medical Question Answering Using Reddit Data: Proof-of-Concept Study | | 0 |
| Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs | | 0 |
| Uncertainty-Aware Fusion: An Ensemble Framework for Mitigating Hallucinations in Large Language Models | | 0 |
| Uncertainty Aware Review Hallucination for Science Article Classification | | 0 |
| Uncertainty-o: One Model-agnostic Framework for Unveiling Uncertainty in Large Multimodal Models | | 0 |
| UNCLE: Uncertainty Expressions in Long-Form Generation | | 0 |
| Understanding Alignment in Multimodal LLMs: A Comprehensive Study | | 0 |
| Understanding and predicting user dissatisfaction in a neural generative chatbot | | 0 |
| Understanding Your Agent: Leveraging Large Language Models for Behavior Explanation | | 0 |
Page 20 of 37
