SOTAVerified

Hallucination Papers

Showing 1031–1040 of 1816 papers

Title | Status | Hype
Harmonic LLMs are Trustworthy | — | 0
Visual Fact Checker: Enabling High-Fidelity Detailed Caption Generation | — | 0
A robust and scalable framework for hallucination detection in virtual tissue staining and digital pathology | — | 0
Hallucination of Multimodal Large Language Models: A Survey | Code | 4
MMAC-Copilot: Multi-modal Agent Collaboration Operating Copilot | — | 0
SERPENT-VLM : Self-Refining Radiology Report Generation Using Vision Language Models | — | 0
Fake Artificial Intelligence Generated Contents (FAIGC): A Survey of Theories, Detection Methods, and Opportunities | — | 0
Can Foundational Large Language Models Assist with Conducting Pharmaceuticals Manufacturing Investigations? | — | 0
Retrieval Head Mechanistically Explains Long-Context Factuality | Code | 3
KS-LLM: Knowledge Selection of Large Language Models with Evidence Document for Question Answering | — | 0
Page 104 of 182
