SOTAVerified

Hallucination

Papers

Showing 76–100 of 1816 papers

Title | Status | Hype
One-for-More: Continual Diffusion Model for Anomaly Detection | Code | 2
Medical Hallucinations in Foundation Models and Their Impact on Healthcare | Code | 2
PIP-KAG: Mitigating Knowledge Conflicts in Knowledge-Augmented Generation via Parametric Pruning | Code | 2
Unveiling the Magic of Code Reasoning through Hypothesis Decomposition and Amendment | Code | 2
Knowledge Graph-Guided Retrieval Augmented Generation | Code | 2
The Hidden Life of Tokens: Reducing Hallucination of Large Vision-Language Models via Visual Information Steering | Code | 2
CHiP: Cross-modal Hierarchical Direct Preference Optimization for Multimodal LLMs | Code | 2
Fast Think-on-Graph: Wider, Deeper and Faster Reasoning of Large Language Model on Knowledge Graph | Code | 2
Mitigating Hallucinations in Large Vision-Language Models via DPO: On-Policy Data Hold the Key | Code | 2
Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention | Code | 2
Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning | Code | 2
Granite Guardian | Code | 2
Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Attention Lens | Code | 2
V-DPO: Mitigating Hallucination in Large Vision Language Models via Vision-Guided Direct Preference Optimization | Code | 2
TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning | Code | 2
Mitigating Object Hallucination via Concentric Causal Attention | Code | 2
Reducing Hallucinations in Vision-Language Models via Latent Space Steering | Code | 2
MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation | Code | 2
VideoAgent: Self-Improving Video Generation | Code | 2
ReFIR: Grounding Large Restoration Models with Retrieval Augmentation | Code | 2
Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality | Code | 2
Differential Transformer | Code | 2
Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal Large Language Models | Code | 2
FaithEval: Can Your Language Model Stay Faithful to Context, Even If "The Moon is Made of Marshmallows" | Code | 2
SSL: A Self-similarity Loss for Improving Generative Image Super-resolution | Code | 2
Page 4 of 73

No leaderboard results yet.