SOTAVerified: Hallucination Papers

Showing 501–525 of 1816 papers

Title | Status | Hype
Preemptive Hallucination Reduction: An Input-Level Approach for Multimodal Language Model | - | 0
Data-efficient Meta-models for Evaluation of Context-based Questions and Answers in LLMs | - | 0
Map&Make: Schema Guided Text to Table Generation | - | 0
Active Layer-Contrastive Decoding Reduces Hallucination in Large Language Model Generation | - | 0
Qwen Look Again: Guiding Vision-Language Reasoning Models to Re-attention Visual Information | Code | 0
MMBoundary: Advancing MLLM Knowledge Boundary Awareness through Reasoning Step Confidence Calibration | Code | 0
Are Reasoning Models More Prone to Hallucination? | - | 0
Evaluation Hallucination in Multi-Round Incomplete Information Lateral-Driven Reasoning Tasks | - | 0
SkewRoute: Training-Free LLM Routing for Knowledge Graph Retrieval-Augmented Generation via Score Skewness of Retrieved Context | - | 0
Mitigating Hallucination in Large Vision-Language Models via Adaptive Attention Calibration | - | 0
A Lightweight Multi-Expert Generative Language Model System for Engineering Information and Knowledge Extraction | - | 0
Grounding Language with Vision: A Conditional Mutual Information Calibrated Decoding Strategy for Reducing Hallucinations in LVLMs | - | 0
Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs | - | 0
Retrieval Visual Contrastive Decoding to Mitigate Object Hallucinations in Large Vision-Language Models | Code | 0
Attention! You Vision Language Model Could Be Maliciously Manipulated | - | 0
Causal-LLaVA: Causal Disentanglement for Mitigating Hallucination in Multimodal Large Language Models | Code | 0
Error Typing for Smarter Rewards: Improving Process Reward Models with Error-Aware Hierarchical Supervision | Code | 0
Enhancing Visual Reliance in Text Generation: A Bayesian Perspective on Mitigating Hallucination in Large Vision-Language Models | - | 0
LLLMs: A Data-Driven Survey of Evolving Research on Limitations of Large Language Models | - | 0
GUARDIAN: Safeguarding LLM Multi-Agent Collaborations with Temporal Graph Modeling | - | 0
CCHall: A Novel Benchmark for Joint Cross-Lingual and Cross-Modal Hallucinations Detection in Large Language Models | Code | 0
MedScore: Factuality Evaluation of Free-Form Medical Answers | Code | 0
Teaching with Lies: Curriculum DPO on Synthetic Negatives for Hallucination Detection | - | 0
More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models | - | 0
keepitsimple at SemEval-2025 Task 3: LLM-Uncertainty based Approach for Multilingual Hallucination Span Detection | Code | 0
Page 21 of 73
