SOTAVerified

Hallucination

Papers

Showing 326–350 of 1816 papers

| Title | Status | Hype |
| --- | --- | --- |
| Knowledge Verification to Nip Hallucination in the Bud | Code | 1 |
| The Dawn After the Dark: An Empirical Study on Factuality Hallucination in Large Language Models | Code | 1 |
| DCR-Consistency: Divide-Conquer-Reasoning for Consistency Evaluation and Improvement of Large Language Models | Code | 1 |
| Advancing TTP Analysis: Harnessing the Power of Large Language Models with Retrieval Augmented Generation | Code | 1 |
| Alleviating Hallucinations of Large Language Models through Induced Hallucinations | Code | 1 |
| Context-aware Decoding Reduces Hallucination in Query-focused Summarization | Code | 1 |
| On Early Detection of Hallucinations in Factual Question Answering | Code | 1 |
| "Knowing When You Don't Know": A Multilingual Relevance Assessment Dataset for Robust Retrieval-Augmented Generation | Code | 1 |
| Hallucination Augmented Contrastive Learning for Multimodal Large Language Model | Code | 1 |
| Towards Enhanced Image Inpainting: Mitigating Unwanted Object Insertion and Preserving Color Consistency | Code | 1 |
| Mitigating Open-Vocabulary Caption Hallucinations | Code | 1 |
| Mitigating Fine-Grained Hallucination by Fine-Tuning Large Vision-Language Models with Caption Rewrites | Code | 1 |
| Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization | Code | 1 |
| UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation | Code | 1 |
| Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus | Code | 1 |
| HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data | Code | 1 |
| R-Tuning: Instructing Large Language Models to Say `I Don't Know' | Code | 1 |
| Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization | Code | 1 |
| AMBER: An LLM-free Multi-dimensional Benchmark for MLLMs Hallucination Evaluation | Code | 1 |
| Volcano: Mitigating Multimodal Hallucination through Self-Feedback Guided Revision | Code | 1 |
| Finding and Editing Multi-Modal Neurons in Pre-Trained Transformers | Code | 1 |
| Holistic Analysis of Hallucination in GPT-4V(ision): Bias and Interference Challenges | Code | 1 |
| SAC3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency | Code | 1 |
| CRUSH4SQL: Collective Retrieval Using Schema Hallucination For Text2SQL | Code | 1 |
| Collaborative Large Language Model for Recommender Systems | Code | 1 |
Page 14 of 73
