
Hallucination Papers

Showing 76–100 of 1816 papers

| Title | Status | Hype |
|-------|--------|------|
| A Lightweight Multi-Expert Generative Language Model System for Engineering Information and Knowledge Extraction | | 0 |
| Mitigating Hallucination in Large Vision-Language Models via Adaptive Attention Calibration | | 0 |
| R3-RAG: Learning Step-by-Step Reasoning and Retrieval for LLMs via Reinforcement Learning | Code | 1 |
| Retrieval Visual Contrastive Decoding to Mitigate Object Hallucinations in Large Vision-Language Models | Code | 0 |
| Causal-LLaVA: Causal Disentanglement for Mitigating Hallucination in Multimodal Large Language Models | Code | 0 |
| Attention! Your Vision Language Model Could Be Maliciously Manipulated | | 0 |
| Error Typing for Smarter Rewards: Improving Process Reward Models with Error-Aware Hierarchical Supervision | Code | 0 |
| Enhancing Visual Reliance in Text Generation: A Bayesian Perspective on Mitigating Hallucination in Large Vision-Language Models | | 0 |
| Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration | Code | 2 |
| Grounding Language with Vision: A Conditional Mutual Information Calibrated Decoding Strategy for Reducing Hallucinations in LVLMs | | 0 |
| Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs | | 0 |
| LLLMs: A Data-Driven Survey of Evolving Research on Limitations of Large Language Models | | 0 |
| CCHall: A Novel Benchmark for Joint Cross-Lingual and Cross-Modal Hallucinations Detection in Large Language Models | Code | 0 |
| GUARDIAN: Safeguarding LLM Multi-Agent Collaborations with Temporal Graph Modeling | | 0 |
| Removal of Hallucination on Hallucination: Debate-Augmented RAG | Code | 1 |
| MedScore: Factuality Evaluation of Free-Form Medical Answers | Code | 0 |
| More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models | | 0 |
| Teaching with Lies: Curriculum DPO on Synthetic Negatives for Hallucination Detection | | 0 |
| keepitsimple at SemEval-2025 Task 3: LLM-Uncertainty based Approach for Multilingual Hallucination Span Detection | Code | 0 |
| LLM-Powered Agents for Navigating Venice's Historical Cadastre | | 0 |
| Chain-of-Thought Poisoning Attacks against R1-based Retrieval-Augmented Generation Systems | | 0 |
| Shadows in the Attention: Contextual Perturbation and Representation Drift in the Dynamics of Hallucination in LLMs | | 0 |
| Mitigating Hallucinations in Vision-Language Models through Image-Guided Head Suppression | Code | 1 |
| Steering LVLMs via Sparse Autoencoder for Hallucination Mitigation | | 0 |
| Seeing Far and Clearly: Mitigating Hallucinations in MLLMs with Attention Causal Decoding | | 0 |
Page 4 of 73
