SOTAVerified

Hallucination Papers

Showing 1151–1160 of 1816 papers

Title | Status | Hype
Fact-Checking the Output of Large Language Models via Token-Level Uncertainty Quantification | | 0
Effectiveness Assessment of Recent Large Vision-Language Models | | 0
Benchmarking Hallucination in Large Language Models based on Unanswerable Math Word Problem | Code | 0
German also Hallucinates! Inconsistency Detection in News Summaries with the Absinth Dataset | Code | 0
KnowAgent: Knowledge-Augmented Planning for LLM-Based Agents | Code | 3
InterrogateLLM: Zero-Resource Hallucination Detection in LLM-Generated Answers | Code | 1
The Claude 3 Model Family: Opus, Sonnet, Haiku | | 0
Right for Right Reasons: Large Language Models for Verifiable Commonsense Knowledge Graph Question Answering | | 0
Quantity Matters: Towards Assessing and Mitigating Number Hallucination in Large Vision-Language Models | | 0
CR-LT-KGQA: A Knowledge Graph Question Answering Dataset Requiring Commonsense Reasoning and Long-Tail Knowledge | Code | 1
Page 116 of 182

No leaderboard results yet.