SOTAVerified

Hallucination

Papers

Showing 1351–1360 of 1816 papers

Title | Status | Hype
Crafting In-context Examples according to LMs' Parametric Knowledge | Code | 0
Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization | Code | 1
How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities | Code | 0
Ever: Mitigating Hallucination in Large Language Models through Real-Time Verification and Rectification | Code | 0
Enhancing Emergency Decision-making with Knowledge Graphs and Large Language Models | | 0
Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models | | 0
Insights into Classifying and Mitigating LLMs' Hallucinations | | 0
Predicting Text Preference Via Structured Comparative Reasoning | | 0
Volcano: Mitigating Multimodal Hallucination through Self-Feedback Guided Revision | Code | 1
AMBER: An LLM-free Multi-dimensional Benchmark for MLLMs Hallucination Evaluation | Code | 1
Page 136 of 182