SOTAVerified

Hallucination

Papers

Showing 11–20 of 1816 papers

Title | Status | Hype
Uncertainty Quantification for Language Models: A Suite of Black-Box, White-Box, LLM Judge, and Ensemble Scorers | Code | 5
Lean Copilot: Large Language Models as Copilots for Theorem Proving in Lean | Code | 5
Weakly Supervised Detection of Hallucinations in LLM Activations | Code | 5
Ferret: Refer and Ground Anything Anywhere at Any Granularity | Code | 5
Chatlaw: A Multi-Agent Collaborative Legal Assistant with Knowledge Graph Enhanced Mixture-of-Experts Large Language Model | Code | 5
LettuceDetect: A Hallucination Detection Framework for RAG Applications | Code | 4
Tarsier2: Advancing Large Vision-Language Models from Detailed Video Description to Comprehensive Video Understanding | Code | 4
A Survey of State of the Art Large Vision Language Models: Alignment, Benchmark, Evaluations and Challenges | Code | 4
Halu-J: Critique-Based Hallucination Judge | Code | 4
Hallucination of Multimodal Large Language Models: A Survey | Code | 4
Page 2 of 182
