SOTAVerified

Hallucination Papers

Showing 411–420 of 1816 papers

Title | Status | Hype
Med-HALT: Medical Domain Hallucination Test for Large Language Models | Code | 1
Doc2Query--: When Less is More | Code | 1
Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation and Beyond | Code | 1
IterGen: Iterative Semantic-aware Structured LLM Generation with Backtracking | Code | 1
Distinguishing Ignorance from Error in LLM Hallucinations | Code | 1
AGIR: Automating Cyber Threat Intelligence Reporting with Natural Language Generation | Code | 1
Benchmarking Llama2, Mistral, Gemma and GPT for Factuality, Toxicity, Bias and Propensity for Hallucinations | Code | 1
Detecting and Mitigating Hallucination in Large Vision Language Models via Fine-Grained AI Feedback | Code | 1
Detecting and Preventing Hallucinations in Large Vision Language Models | Code | 1
Lyra: Orchestrating Dual Correction in Automated Theorem Proving | Code | 1
Page 42 of 182
