SOTAVerified

Hallucination

Papers

Showing 351–375 of 1816 papers

Title | Status | Hype
'Generalization is hallucination' through the lens of tensor completions | – | 0
LLM-QE: Improving Query Expansion by Aligning Large Language Models with Ranking Preferences | Code | 1
LettuceDetect: A Hallucination Detection Framework for RAG Applications | Code | 4
Uncertainty-Aware Fusion: An Ensemble Framework for Mitigating Hallucinations in Large Language Models | – | 0
ZiGong 1.0: A Large Language Model for Financial Credit | – | 0
The Law of Knowledge Overshadowing: Towards Understanding, Predicting, and Preventing LLM Hallucination | – | 0
PIP-KAG: Mitigating Knowledge Conflicts in Knowledge-Augmented Generation via Parametric Pruning | Code | 2
The Role of Background Information in Reducing Object Hallucination in Vision-Language Models: Insights from Cutoff API Prompting | – | 0
Verify when Uncertain: Beyond Self-Consistency in Black Box Hallucination Detection | – | 0
Hallucination Detection in Large Language Models with Metamorphic Relations | – | 0
Large Language Models Struggle to Describe the Haystack without Human Help: Human-in-the-loop Evaluation of LLMs | – | 0
MedHallu: A Comprehensive Benchmark for Detecting Medical Hallucinations in Large Language Models | – | 0
SegSub: Evaluating Robustness to Knowledge Conflicts and Hallucinations in Vision-Language Models | Code | 0
OpenSearch-SQL: Enhancing Text-to-SQL with Dynamic Few-shot and Consistency Alignment | – | 0
Detecting LLM Fact-conflicting Hallucinations Enhanced by Temporal-logic-based Reasoning | – | 0
REFIND: Retrieval-Augmented Factuality Hallucination Detection in Large Language Models | – | 0
What are Models Thinking about? Understanding Large Language Model Hallucinations "Psychology" through Model Inner State Analysis | – | 0
TreeCut: A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation | Code | 0
Lost in Transcription, Found in Distribution Shift: Demystifying Hallucination in Speech Foundation Models | – | 0
CutPaste&Find: Efficient Multimodal Hallucination Detector with Visual-aid Knowledge Base | – | 0
R2-KG: General-Purpose Dual-Agent Framework for Reliable Reasoning on Knowledge Graphs | Code | 1
How Much Do LLMs Hallucinate across Languages? On Multilingual Estimation of LLM Hallucination in the Wild | Code | 0
Unveiling the Magic of Code Reasoning through Hypothesis Decomposition and Amendment | Code | 2
Can Your Uncertainty Scores Detect Hallucinated Entity? | – | 0
Valuable Hallucinations: Realizable Non-realistic Propositions | – | 0
Page 15 of 73
