SOTAVerified: Hallucination Papers

Showing 851–860 of 1816 papers

Title | Status | Hype
A Topic-level Self-Correctional Approach to Mitigate Hallucinations in MLLMs | | 0
Aligner: Efficient Alignment by Learning to Correct | | 0
Improving the Reliability of Large Language Models by Leveraging Uncertainty-Aware In-Context Learning | | 0
Improving the Reliability of LLMs: Combining CoT, RAG, Self-Consistency, and Self-Verification | | 0
From Misleading Queries to Accurate Answers: A Three-Stage Fine-Tuning Method for LLMs | | 0
From "Hallucination" to "Suture": Insights from Language Philosophy to Enhance Large Language Models | | 0
Comparing Computational Architectures for Automated Journalism | | 0
Incremental Scene Synthesis | | 0
From Hallucinations to Facts: Enhancing Language Models with Curated Knowledge Graphs | | 0
SHARP: Unlocking Interactive Hallucination via Stance Transfer in Role-Playing Agents | | 0
Page 86 of 182
