SOTAVerified

Hallucination

Papers

Showing 171–180 of 1816 papers

Title | Status | Hype
----- | ------ | ----
Safe: Enhancing Mathematical Reasoning in Large Language Models via Retrospective Step-aware Formal Verification | Code | 1
OWMM-Agent: Open World Mobile Manipulation With Multi-modal Agentic Data Synthesis | Code | 1
FlySearch: Exploring how vision-language models explore | Code | 1
The Hallucination Dilemma: Factuality-Aware Reinforcement Learning for Large Reasoning Models | Code | 1
CogniBench: A Legal-inspired Framework and Dataset for Assessing Cognitive Faithfulness of Large Language Models | Code | 1
R3-RAG: Learning Step-by-Step Reasoning and Retrieval for LLMs via Reinforcement Learning | Code | 1
Removal of Hallucination on Hallucination: Debate-Augmented RAG | Code | 1
Mitigating Hallucinations in Vision-Language Models through Image-Guided Head Suppression | Code | 1
Know Or Not: a library for evaluating out-of-knowledge base robustness | Code | 1
Phare: A Safety Probe for Large Language Models | Code | 1
Page 18 of 182

No leaderboard results yet.