SOTAVerified

Hallucination

Papers

Showing 11–20 of 1816 papers

| Title | Status | Hype |
|---|---|---|
| Uncertainty Quantification for Language Models: A Suite of Black-Box, White-Box, LLM Judge, and Ensemble Scorers | Code | 5 |
| Weakly Supervised Detection of Hallucinations in LLM Activations | Code | 5 |
| DeepEyes: Incentivizing "Thinking with Images" via Reinforcement Learning | Code | 5 |
| Chatlaw: A Multi-Agent Collaborative Legal Assistant with Knowledge Graph Enhanced Mixture-of-Experts Large Language Model | Code | 5 |
| Lean Copilot: Large Language Models as Copilots for Theorem Proving in Lean | Code | 5 |
| A Survey of State of the Art Large Vision Language Models: Alignment, Benchmark, Evaluations and Challenges | Code | 4 |
| Multimodal Chain-of-Thought Reasoning in Language Models | Code | 4 |
| Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models | Code | 4 |
| Knowledge-tuning Large Language Models with Structured Medical Knowledge Bases for Reliable Response Generation in Chinese | Code | 4 |
| Do LLMs Possess a Personality? Making the MBTI Test an Amazing Evaluation for Large Language Models | Code | 4 |
Page 2 of 182
