SOTA Verified

Hallucination

Papers

Showing 1–10 of 1816 papers

Title | Status | Hype
Mitigating Object Hallucinations via Sentence-Level Early Intervention | Code | 1
ByDeWay: Boost Your multimodal LLM with DEpth prompting in a Training-Free Way | - | 0
UQLM: A Python Package for Uncertainty Quantification in Large Language Models | Code | 5
ReLoop: "Seeing Twice and Thinking Backwards" via Closed-loop Training to Mitigate Hallucinations in Multimodal understanding | - | 0
DeepRetro: Retrosynthetic Pathway Discovery using Iterative LLM Reasoning | - | 0
The Future is Agentic: Definitions, Perspectives, and Open Challenges of Multi-Agent Recommender Systems | - | 0
GAF-Guard: An Agentic Framework for Risk Management and Governance in Large Language Models | Code | 0
Mitigating Hallucination of Large Vision-Language Models via Dynamic Logits Calibration | Code | 0
HalluSegBench: Counterfactual Visual Reasoning for Segmentation Hallucination Evaluation | - | 0
Seeing is Believing? Mitigating OCR Hallucinations in Multimodal Large Language Models | - | 0
Page 1 of 182

No leaderboard results yet.