SOTAVerified

Hallucination Papers

Showing 301–350 of 1816 papers

Title | Status | Hype
The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? | Code | 1
XReal: Realistic Anatomy and Pathology-Aware X-ray Generation via Controllable Diffusion Model | Code | 1
Federated Recommendation via Hybrid Retrieval Augmented Generation | Code | 1
InterrogateLLM: Zero-Resource Hallucination Detection in LLM-Generated Answers | Code | 1
CR-LT-KGQA: A Knowledge Graph Question Answering Dataset Requiring Commonsense Reasoning and Long-Tail Knowledge | Code | 1
DiaHalu: A Dialogue-level Hallucination Evaluation Benchmark for Large Language Models | Code | 1
All in an Aggregated Image for In-Image Learning | Code | 1
Detecting Machine-Generated Texts by Multi-Population Aware Optimization for Maximum Mean Discrepancy | Code | 1
Citation-Enhanced Generation for LLM-based Chatbots | Code | 1
A Data-Centric Approach To Generate Faithful and High Quality Patient Summaries with Large Language Models | Code | 1
Seeing is Believing: Mitigating Hallucination in Large Vision-Language Models via CLIP-Guided Decoding | Code | 1
Visual Hallucinations of Multi-modal Large Language Models | Code | 1
TofuEval: Evaluating Hallucinations of LLMs on Topic-Focused Dialogue Summarization | Code | 1
Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models | Code | 1
Uncertainty Quantification for In-Context Learning of Large Language Models | Code | 1
EFUF: Efficient Fine-grained Unlearning Framework for Mitigating Hallucinations in Multimodal Large Language Models | Code | 1
Into the Unknown: Self-Learning Large Language Models | Code | 1
Gemini Goes to Med School: Exploring the Capabilities of Multimodal Large Language Models on Medical Challenge Problems & Hallucinations | Code | 1
Introspective Planning: Aligning Robots' Uncertainty with Inherent Task Ambiguity | Code | 1
INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection | Code | 1
Training Language Models to Generate Text with Citations via Fine-grained Rewards | Code | 1
Unified Hallucination Detection for Multimodal Large Language Models | Code | 1
Skip \n: A Simple Method to Reduce Hallucination in Large Vision-Language Models | Code | 1
K-QA: A Real-World Medical Q&A Benchmark | Code | 1
How well can a large language model explain business processes as perceived by users? | Code | 1
Knowledge Verification to Nip Hallucination in the Bud | Code | 1
The Dawn After the Dark: An Empirical Study on Factuality Hallucination in Large Language Models | Code | 1
DCR-Consistency: Divide-Conquer-Reasoning for Consistency Evaluation and Improvement of Large Language Models | Code | 1
Advancing TTP Analysis: Harnessing the Power of Large Language Models with Retrieval Augmented Generation | Code | 1
Alleviating Hallucinations of Large Language Models through Induced Hallucinations | Code | 1
Context-aware Decoding Reduces Hallucination in Query-focused Summarization | Code | 1
On Early Detection of Hallucinations in Factual Question Answering | Code | 1
"Knowing When You Don't Know": A Multilingual Relevance Assessment Dataset for Robust Retrieval-Augmented Generation | Code | 1
Hallucination Augmented Contrastive Learning for Multimodal Large Language Model | Code | 1
Towards Enhanced Image Inpainting: Mitigating Unwanted Object Insertion and Preserving Color Consistency | Code | 1
Mitigating Open-Vocabulary Caption Hallucinations | Code | 1
Mitigating Fine-Grained Hallucination by Fine-Tuning Large Vision-Language Models with Caption Rewrites | Code | 1
Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization | Code | 1
UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation | Code | 1
Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus | Code | 1
HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data | Code | 1
R-Tuning: Instructing Large Language Models to Say `I Don't Know' | Code | 1
Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization | Code | 1
AMBER: An LLM-free Multi-dimensional Benchmark for MLLMs Hallucination Evaluation | Code | 1
Volcano: Mitigating Multimodal Hallucination through Self-Feedback Guided Revision | Code | 1
Finding and Editing Multi-Modal Neurons in Pre-Trained Transformers | Code | 1
Holistic Analysis of Hallucination in GPT-4V(ision): Bias and Interference Challenges | Code | 1
SAC3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency | Code | 1
CRUSH4SQL: Collective Retrieval Using Schema Hallucination For Text2SQL | Code | 1
Collaborative Large Language Model for Recommender Systems | Code | 1
Page 7 of 37
