SOTAVerified

Hallucination

Papers

Showing 1301–1350 of 1816 papers

Title | Status | Hype
Don't Believe Everything You Read: Enhancing Summarization Interpretability through Automatic Identification of Hallucinations in Large Language Models | | 0
Theory of Hallucinations based on Equivariance | | 0
Context-aware Decoding Reduces Hallucination in Query-focused Summarization | Code | 1
Reducing Hallucinations: Enhancing VQA for Flood Disaster Damage Assessment with Visual Contexts | | 0
Experimenting with Large Language Models and vector embeddings in NASA SciX | | 0
Quantifying Bias in Text-to-Image Generative Models | | 0
On Early Detection of Hallucinations in Factual Question Answering | Code | 1
MELO: Enhancing Model Editing with Neuron-Indexed Dynamic LoRA | Code | 0
"Knowing When You Don't Know": A Multilingual Relevance Assessment Dataset for Robust Retrieval-Augmented Generation | Code | 1
Retrieval-Augmented Generation for Large Language Models: A Survey | Code | 4
Silkie: Preference Distillation for Large Visual Language Models | | 0
Towards Verifiable Text Generation with Evolving Memory and Self-Reflection | | 0
Vista-LLaMA: Reliable Video Narrator via Equal Distance to Visual Tokens | | 0
Improving Factual Error Correction by Learning to Inject Factual Errors | Code | 0
Hallucination Augmented Contrastive Learning for Multimodal Large Language Model | Code | 1
Evaluating ChatGPT as a Question Answering System: A Comprehensive Analysis and Comparison with Existing Models | | 0
Context Tuning for Retrieval Augmented Generation | | 0
Towards Enhanced Image Inpainting: Mitigating Unwanted Object Insertion and Preserving Color Consistency | Code | 1
DelucionQA: Detecting Hallucinations in Domain-specific Question Answering | | 0
HALO: An Ontology for Representing and Categorizing Hallucinations in Large Language Models | | 0
Mitigating Open-Vocabulary Caption Hallucinations | Code | 1
Weakly Supervised Detection of Hallucinations in LLM Activations | Code | 5
Mitigating Fine-Grained Hallucination by Fine-Tuning Large Vision-Language Models with Caption Rewrites | Code | 1
Behind the Magic, MERLIM: Multi-modal Evaluation Benchmark for Large Image-Language Models | Code | 0
RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback | Code | 6
On Exploring the Reasoning Capability of Large Language Models with Knowledge Graphs | | 0
Understanding Your Agent: Leveraging Large Language Models for Behavior Explanation | | 0
OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation | Code | 2
How to Build an AI Tutor That Can Adapt to Any Course Using Knowledge Graph-Enhanced Retrieval-Augmented Generation (KG-RAG) | | 0
Combating the "Sameness" in AI Art: Reflections on the Interactive AI Installation Fencing Hallucination | | 0
Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding | Code | 2
Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization | Code | 1
Mitigating Hallucination in Visual Language Models with Visual Supervision | | 0
Deficiency of Large Language Models in Finance: An Empirical Examination of Hallucination | | 0
UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation | Code | 1
Calibrated Language Models Must Hallucinate | | 0
Challenges of Large Language Models for Mental Health Counseling | | 0
Controlling Large Language Model-based Agents for Large-Scale Decision-Making: An Actor-Critic Approach | | 0
Minimizing Factual Inconsistency and Hallucination in Large Language Models | | 0
Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus | Code | 1
HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data | Code | 1
Mitigating Large Language Model Hallucinations via Autonomous Knowledge Graph-based Retrofitting | | 0
Adapting LLMs for Efficient, Personalized Information Retrieval: Methods and Implications | | 0
KNVQA: A Benchmark for evaluation knowledge-based VQA | | 0
Control in Hybrid Chatbots | | 0
GPT-4V(ision) for Robotics: Multimodal Task Planning from Human Demonstration | | 0
Chain of Visual Perception: Harnessing Multimodal Large Language Models for Zero-shot Camouflaged Object Detection | Code | 0
Journey of Hallucination-minimized Generative AI Solutions for Financial Decision Makers | | 0
R-Tuning: Instructing Large Language Models to Say `I Don't Know' | Code | 1
Deceptive Semantic Shortcuts on Reasoning Chains: How Far Can Models Go without Hallucination? | Code | 0
Page 27 of 37

No leaderboard results yet.