SOTAVerified: Hallucination Papers

Showing 51–100 of 1,816 papers

Title | Status | Hype
Retrieval Head Mechanistically Explains Long-Context Factuality | Code | 3
View Selection for 3D Captioning via Diffusion Ranking | Code | 3
Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models | Code | 3
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Horizon Generation | Code | 3
KnowAgent: Knowledge-Augmented Planning for LLM-Based Agents | Code | 3
EventRL: Enhancing Event Extraction with Outcome Supervision for Large Language Models | Code | 3
LLMDFA: Analyzing Dataflow in Code with Large Language Models | Code | 3
PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models | Code | 3
ResumeFlow: An LLM-facilitated Pipeline for Personalized Resume Generation and Refinement | Code | 3
PokeLLMon: A Human-Parity Agent for Pokemon Battles with Large Language Models | Code | 3
When Large Language Models Meet Vector Databases: A Survey | Code | 3
Evaluating Hallucinations in Chinese Large Language Models | Code | 3
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | Code | 3
WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia | Code | 3
Towards An End-to-End Framework for Flow-Guided Video Inpainting | Code | 3
PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models | Code | 3
DiscoSG: Towards Discourse-Level Text Scene Graph Parsing through Iterative Graph Refinement | Code | 2
FinMME: Benchmark Dataset for Financial Multi-Modal Reasoning Evaluation | Code | 2
Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration | Code | 2
DyFo: A Training-Free Dynamic Focus Visual Search for Enhancing LMMs in Fine-Grained Visual Understanding | Code | 2
Generate, but Verify: Reducing Hallucination in Vision-Language Models with Retrospective Resampling | Code | 2
Dynamic Parametric Retrieval Augmented Generation for Test-time Knowledge Enhancement | Code | 2
RARE: Retrieval-Augmented Reasoning Modeling | Code | 2
ClearSight: Visual Signal Enhancement for Object Hallucination Mitigation in Multimodal Large Language Models | Code | 2
WMNav: Integrating Vision-Language Models into World Models for Object Goal Navigation | Code | 2
One-for-More: Continual Diffusion Model for Anomaly Detection | Code | 2
Medical Hallucinations in Foundation Models and Their Impact on Healthcare | Code | 2
PIP-KAG: Mitigating Knowledge Conflicts in Knowledge-Augmented Generation via Parametric Pruning | Code | 2
Unveiling the Magic of Code Reasoning through Hypothesis Decomposition and Amendment | Code | 2
Knowledge Graph-Guided Retrieval Augmented Generation | Code | 2
The Hidden Life of Tokens: Reducing Hallucination of Large Vision-Language Models via Visual Information Steering | Code | 2
CHiP: Cross-modal Hierarchical Direct Preference Optimization for Multimodal LLMs | Code | 2
Fast Think-on-Graph: Wider, Deeper and Faster Reasoning of Large Language Model on Knowledge Graph | Code | 2
Mitigating Hallucinations in Large Vision-Language Models via DPO: On-Policy Data Hold the Key | Code | 2
Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention | Code | 2
Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning | Code | 2
Granite Guardian | Code | 2
Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Attention Lens | Code | 2
V-DPO: Mitigating Hallucination in Large Vision Language Models via Vision-Guided Direct Preference Optimization | Code | 2
TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning | Code | 2
Mitigating Object Hallucination via Concentric Causal Attention | Code | 2
Reducing Hallucinations in Vision-Language Models via Latent Space Steering | Code | 2
MLLM Can See? Dynamic Correction Decoding for Hallucination Mitigation | Code | 2
VideoAgent: Self-Improving Video Generation | Code | 2
ReFIR: Grounding Large Restoration Models with Retrieval Augmentation | Code | 2
Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality | Code | 2
Differential Transformer | Code | 2
Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal Large Language Models | Code | 2
FaithEval: Can Your Language Model Stay Faithful to Context, Even If "The Moon is Made of Marshmallows" | Code | 2
SSL: A Self-similarity Loss for Improving Generative Image Super-resolution | Code | 2
Page 2 of 37