SOTAVerified

Hallucination Papers

Showing 401–450 of 1816 papers

Title | Status | Hype
Assessing the use of Diffusion models for motion artifact correction in brain MRI | - | 0
MINT: Mitigating Hallucinations in Large Vision-Language Models via Token Reduction | - | 0
Poison as Cure: Visual Noise for Mitigating Object Hallucinations in LVMs | - | 0
Importing Phantoms: Measuring LLM Package Hallucination Vulnerabilities | - | 0
Differentially Private Steering for Large Language Model Alignment | Code | 0
Open-Source Retrieval Augmented Generation Framework for Retrieving Accurate Medication Insights from Formularies for African Healthcare Workers | - | 0
Mitigating Hallucinated Translations in Large Language Models with Hallucination-focused Preference Optimization | - | 0
Few-Shot Optimized Framework for Hallucination Detection in Resource-Limited NLP Systems | - | 0
CHiP: Cross-modal Hierarchical Direct Preference Optimization for Multimodal LLMs | Code | 2
Scaling Large Vision-Language Models for Enhanced Multimodal Comprehension In Biomedical Image Analysis | - | 0
Mirage in the Eyes: Hallucination Attack on Multi-modal Large Language Models with Only Attention Sink | - | 0
Evaluating Hallucination in Large Vision-Language Models based on Context-Aware Object Similarities | - | 0
Measuring and Mitigating Hallucinations in Vision-Language Dataset Generation for Remote Sensing | - | 0
Fast Think-on-Graph: Wider, Deeper and Faster Reasoning of Large Language Model on Knowledge Graph | Code | 2
Comprehensive Modeling and Question Answering of Cancer Clinical Practice Guidelines using LLMs | - | 0
Hallucinations Can Improve Large Language Models in Drug Discovery | - | 0
RAG-Reward: Optimizing RAG with Reward Modeling and RLHF | - | 0
OnionEval: An Unified Evaluation of Fact-conflicting Hallucination for Small-Large Language Models | Code | 0
PAINT: Paying Attention to INformed Tokens to Mitigate Hallucination in Large Vision-Language Model | Code | 1
Question-to-Question Retrieval for Hallucination-Free Knowledge Access: An Approach for Wikipedia and Wikidata Question Answering | - | 0
Hallucination Mitigation using Agentic AI Natural Language-Based Frameworks | Code | 0
ArxEval: Evaluating Retrieval and Generation in Language Models for Scientific Literature | - | 0
Attention-guided Self-reflection for Zero-shot Hallucination Detection in Large Language Models | - | 0
FRAG: A Flexible Modular Framework for Retrieval-Augmented Generation based on Knowledge Graphs | - | 0
Mitigating Hallucinations in Large Vision-Language Models via DPO: On-Policy Data Hold the Key | Code | 2
A Survey on Responsible LLMs: Inherent Risk, Malicious Use, and Mitigation Strategy | - | 0
ChartInsighter: An Approach for Mitigating Hallucination in Time-series Chart Summary Generation with A Benchmark Dataset | Code | 1
Knowledge Graph-based Retrieval-Augmented Generation for Schema Matching | Code | 1
Multimodal LLMs Can Reason about Aesthetics in Zero-Shot | Code | 1
HALoGEN: Fantastic LLM Hallucinations and Where to Find Them | - | 0
Tarsier2: Advancing Large Vision-Language Models from Detailed Video Description to Comprehensive Video Understanding | Code | 4
GPT as a Monte Carlo Language Tree: A Probabilistic Perspective | - | 0
Fine-tuning Large Language Models for Improving Factuality in Legal Question Answering | Code | 0
VASparse: Towards Efficient Visual Hallucination Mitigation for Large Vision-Language Model via Visual-Aware Sparsification | Code | 1
MedCT: A Clinical Terminology Graph for Generative AI Applications in Healthcare | - | 0
Hermit Kingdom Through the Lens of Multiple Perspectives: A Case Study of LLM Hallucination on North Korea | - | 0
ECBench: Can Multi-modal Foundation Models Understand the Egocentric World? A Holistic Embodied Cognition Benchmark | Code | 1
Seeing with Partial Certainty: Conformal Prediction for Robotic Scene Recognition in Built Environments | - | 0
Feedback-Driven Vision-Language Alignment with Minimal Human Supervision | - | 0
RAG-Check: Evaluating Multimodal Retrieval Augmented Generation Performance | - | 0
FlippedRAG: Black-Box Opinion Manipulation Adversarial Attacks to Retrieval-Augmented Generation Models | - | 0
EAGLE: Enhanced Visual Grounding Minimizes Hallucinations in Instructional Multimodal Models | - | 0
Socratic Questioning: Learn to Self-guide Multimodal Reasoning in the Wild | Code | 0
Foundations of GenIR | - | 0
CHAIR -- Classifier of Hallucination as Improver | Code | 0
A Survey of State of the Art Large Vision Language Models: Alignment, Benchmark, Evaluations and Challenges | Code | 4
CarbonChat: Large Language Model-Based Corporate Carbon Emission Analysis and Climate Knowledge Q&A System | - | 0
Mitigating Hallucination for Large Vision Language Model by Inter-Modality Correlation Calibration Decoding | Code | 1
LLMs & Legal Aid: Understanding Legal Needs Exhibited Through User Queries | - | 0
Enhancing Uncertainty Modeling with Semantic Graph for Hallucination Detection | - | 0
Page 9 of 37