SOTAVerified

Hallucination Papers

Showing 531–540 of 1816 papers

| Title | Status | Hype |
|---|---|---|
| Meaningless is better: hashing bias-inducing words in LLM prompts improves performance in logical reasoning and statistical learning | | 0 |
| AI2T: Building Trustable AI Tutors by Interactively Teaching a Self-Aware Learning Agent | | 0 |
| VidHal: Benchmarking Temporal Hallucinations in Vision LLMs | Code | 1 |
| AtomR: Atomic Operator-Empowered Large Language Models for Heterogeneous Knowledge Reasoning | Code | 1 |
| Enhancing Multi-Agent Consensus through Third-Party LLM Integration: Analyzing Uncertainty and Mitigating Hallucinations in Large Language Models | | 0 |
| O1 Replication Journey -- Part 2: Surpassing O1-preview through Simple Distillation, Big Progress or Bitter Lesson? | Code | 7 |
| VaLiD: Mitigating the Hallucination of Large Vision Language Models by Visual Layer Fusion Contrastive Decoding | Code | 1 |
| Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Attention Lens | Code | 2 |
| Ontology-Constrained Generation of Domain-Specific Clinical Summaries | Code | 0 |
| ICT: Image-Object Cross-Level Trusted Intervention for Mitigating Object Hallucination in Large Vision-Language Models | | 0 |
Page 54 of 182
