SOTAVerified

Hallucination

Papers

Showing 411–420 of 1816 papers

| Title | Status | Hype |
| --- | --- | --- |
| Evaluating the Quality of Hallucination Benchmarks for Large Vision-Language Models | Code | 1 |
| Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation and Beyond | Code | 1 |
| Evaluation and Analysis of Hallucination in Large Vision-Language Models | Code | 1 |
| AGIR: Automating Cyber Threat Intelligence Reporting with Natural Language Generation | Code | 1 |
| EventHallusion: Diagnosing Event Hallucinations in Video LLMs | Code | 1 |
| Factored Verification: Detecting and Reducing Hallucination in Summaries of Academic Papers | Code | 1 |
| Entity-level Factual Consistency of Abstractive Text Summarization | Code | 1 |
| Detecting and Mitigating Hallucination in Large Vision Language Models via Fine-Grained AI Feedback | Code | 1 |
| Detecting and Preventing Hallucinations in Large Vision Language Models | Code | 1 |
| Entity-Based Knowledge Conflicts in Question Answering | Code | 1 |
