
Hallucination

Papers

Showing 511–520 of 1816 papers

| Title | Status | Hype |
| --- | --- | --- |
| A Lightweight Multi-Expert Generative Language Model System for Engineering Information and Knowledge Extraction | — | 0 |
| Retrieval Visual Contrastive Decoding to Mitigate Object Hallucinations in Large Vision-Language Models | Code | 0 |
| Enhancing Visual Reliance in Text Generation: A Bayesian Perspective on Mitigating Hallucination in Large Vision-Language Models | — | 0 |
| Error Typing for Smarter Rewards: Improving Process Reward Models with Error-Aware Hierarchical Supervision | Code | 0 |
| Grounding Language with Vision: A Conditional Mutual Information Calibrated Decoding Strategy for Reducing Hallucinations in LVLMs | — | 0 |
| Attention! You Vision Language Model Could Be Maliciously Manipulated | — | 0 |
| Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs | — | 0 |
| Causal-LLaVA: Causal Disentanglement for Mitigating Hallucination in Multimodal Large Language Models | Code | 0 |
| GUARDIAN: Safeguarding LLM Multi-Agent Collaborations with Temporal Graph Modeling | — | 0 |
| LLLMs: A Data-Driven Survey of Evolving Research on Limitations of Large Language Models | — | 0 |
Page 52 of 182
