SOTAVerified
Hallucination Papers

Showing 401–425 of 1816 papers

| Title | Status | Hype |
|---|---|---|
| Assessing the use of Diffusion models for motion artifact correction in brain MRI | | 0 |
| MINT: Mitigating Hallucinations in Large Vision-Language Models via Token Reduction | | 0 |
| Importing Phantoms: Measuring LLM Package Hallucination Vulnerabilities | | 0 |
| Poison as Cure: Visual Noise for Mitigating Object Hallucinations in LVMs | | 0 |
| Differentially Private Steering for Large Language Model Alignment | Code | 0 |
| Open-Source Retrieval Augmented Generation Framework for Retrieving Accurate Medication Insights from Formularies for African Healthcare Workers | | 0 |
| Mitigating Hallucinated Translations in Large Language Models with Hallucination-focused Preference Optimization | | 0 |
| Few-Shot Optimized Framework for Hallucination Detection in Resource-Limited NLP Systems | | 0 |
| CHiP: Cross-modal Hierarchical Direct Preference Optimization for Multimodal LLMs | Code | 2 |
| Scaling Large Vision-Language Models for Enhanced Multimodal Comprehension In Biomedical Image Analysis | | 0 |
| Mirage in the Eyes: Hallucination Attack on Multi-modal Large Language Models with Only Attention Sink | | 0 |
| Evaluating Hallucination in Large Vision-Language Models based on Context-Aware Object Similarities | | 0 |
| Measuring and Mitigating Hallucinations in Vision-Language Dataset Generation for Remote Sensing | | 0 |
| Fast Think-on-Graph: Wider, Deeper and Faster Reasoning of Large Language Model on Knowledge Graph | Code | 2 |
| Comprehensive Modeling and Question Answering of Cancer Clinical Practice Guidelines using LLMs | | 0 |
| Hallucinations Can Improve Large Language Models in Drug Discovery | | 0 |
| RAG-Reward: Optimizing RAG with Reward Modeling and RLHF | | 0 |
| OnionEval: An Unified Evaluation of Fact-conflicting Hallucination for Small-Large Language Models | Code | 0 |
| PAINT: Paying Attention to INformed Tokens to Mitigate Hallucination in Large Vision-Language Model | Code | 1 |
| Question-to-Question Retrieval for Hallucination-Free Knowledge Access: An Approach for Wikipedia and Wikidata Question Answering | | 0 |
| Hallucination Mitigation using Agentic AI Natural Language-Based Frameworks | Code | 0 |
| ArxEval: Evaluating Retrieval and Generation in Language Models for Scientific Literature | | 0 |
| FRAG: A Flexible Modular Framework for Retrieval-Augmented Generation based on Knowledge Graphs | | 0 |
| Attention-guided Self-reflection for Zero-shot Hallucination Detection in Large Language Models | | 0 |
| Mitigating Hallucinations in Large Vision-Language Models via DPO: On-Policy Data Hold the Key | Code | 2 |
Page 17 of 73
