SOTAVerified: Hallucination Papers

Showing 101–150 of 1816 papers

Title | Status | Hype
MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation | Code | 2
Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Attention Lens | Code | 2
Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding | Code | 2
One-for-More: Continual Diffusion Model for Anomaly Detection | Code | 2
MeMemo: On-device Retrieval Augmentation for Private and Personalized Text Generation | Code | 2
Medical Hallucinations in Foundation Models and Their Impact on Healthcare | Code | 2
MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in Large Language Models | Code | 2
Aligning Modalities in Vision Large Language Models via Preference Fine-tuning | Code | 2
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Code | 2
Enabling Large Language Models to Generate Text with Citations | Code | 2
mDPO: Conditional Preference Optimization for Multimodal Large Language Models | Code | 2
Mitigating Hallucinations in Large Vision-Language Models via DPO: On-Policy Data Hold the Key | Code | 2
Lookback Lens: Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps | Code | 2
Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal Large Language Models | Code | 2
Controllable and Reliable Knowledge-Intensive Task-Oriented Conversational Agents with Declarative Genie Worksheets | Code | 2
LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models | Code | 2
Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality | Code | 2
OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation | Code | 2
Self-Introspective Decoding: Alleviating Hallucinations for Large Vision-Language Models | Code | 2
Lawyer LLaMA Technical Report | Code | 2
Think-on-Graph: Deep and Responsible Reasoning of Large Language Model on Knowledge Graph | Code | 2
TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning | Code | 2
KnowHalu: Hallucination Detection via Multi-Form Knowledge Based Factual Checking | Code | 2
ClearSight: Visual Signal Enhancement for Object Hallucination Mitigation in Multimodal Large Language Models | Code | 2
TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space | Code | 2
Knowledge Graph-Guided Retrieval Augmented Generation | Code | 2
Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective | Code | 2
InstructGraph: Boosting Large Language Models via Graph-centric Instruction Tuning and Preference Alignment | Code | 2
In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation | Code | 2
Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions | Code | 2
HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models | Code | 2
A Survey on Hallucination in Large Vision-Language Models | Code | 2
HalOmi: A Manually Annotated Benchmark for Multilingual Hallucination and Omission Detection in Machine Translation | Code | 2
Image Textualization: An Automatic Framework for Creating Accurate and Detailed Image Descriptions | Code | 2
VHM: Versatile and Honest Vision Language Model for Remote Sensing Image Analysis | Code | 2
Granite Guardian | Code | 2
Dynamic Parametric Retrieval Augmented Generation for Test-time Knowledge Enhancement | Code | 2
HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding | Code | 2
Calibrated Self-Rewarding Vision Language Models | Code | 2
Benchmarking Large Language Models in Retrieval-Augmented Generation | Code | 2
MLAgentBench: Evaluating Language Agents on Machine Learning Experimentation | Code | 2
GPT-NER: Named Entity Recognition via Large Language Models | Code | 2
FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation | Code | 2
Automatically Correcting Large Language Models: Surveying the Landscape of Diverse Self-Correction Strategies | Code | 2
From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models | Code | 2
FinMME: Benchmark Dataset for Financial Multi-Modal Reasoning Evaluation | Code | 2
A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions | Code | 2
Generate, but Verify: Reducing Hallucination in Vision-Language Models with Retrospective Resampling | Code | 2
CHiP: Cross-modal Hierarchical Direct Preference Optimization for Multimodal LLMs | Code | 2
DeliLaw: A Chinese Legal Counselling System Based on a Large Language Model | Code | 2
Page 3 of 37