SOTAVerified

Hallucination Papers

Showing 181–190 of 1816 papers

| Title | Status | Hype |
| --- | --- | --- |
| Phare: A Safety Probe for Large Language Models | Code | 1 |
| A Head to Predict and a Head to Question: Pre-trained Uncertainty Quantification Heads for Hallucination Detection in LLM Outputs | Code | 1 |
| Hallucination-Aware Multimodal Benchmark for Gastrointestinal Image Analysis with Large Vision-Language Models | Code | 1 |
| Benchmarking LLM Faithfulness in RAG with Evolving Leaderboards | Code | 1 |
| Invoke Interfaces Only When Needed: Adaptive Invocation for Large Language Models in Question Answering | Code | 1 |
| VideoHallu: Evaluating and Mitigating Multi-modal Hallucinations on Synthetic Video Understanding | Code | 1 |
| Antidote: A Unified Framework for Mitigating LVLM Hallucinations in Counterfactual Presupposition and Object Perception | Code | 1 |
| Analyzing LLMs' Knowledge Boundary Cognition Across Languages Through the Lens of Internal Representations | Code | 1 |
| VistaDPO: Video Hierarchical Spatial-Temporal Direct Preference Optimization for Large Video Models | Code | 1 |
| EmbodiedAgent: A Scalable Hierarchical Approach to Overcome Practical Challenges in Multi-Robot Control | Code | 1 |
Page 19 of 182
