SOTAVerified

Hallucination Papers

Showing 201–225 of 1816 papers

| Title | Status | Hype |
|---|---|---|
| Mitigating Hallucinations in Large Vision-Language Models by Adaptively Constraining Information Flow | Code | 1 |
| ProAPO: Progressively Automatic Prompt Optimization for Visual Classification | Code | 1 |
| Hallucination Detection in LLMs Using Spectral Features of Attention Maps | Code | 1 |
| LLM-QE: Improving Query Expansion by Aligning Large Language Models with Ranking Preferences | Code | 1 |
| R2-KG: General-Purpose Dual-Agent Framework for Reliable Reasoning on Knowledge Graphs | Code | 1 |
| Large Language Models for Multi-Robot Systems: A Survey | Code | 1 |
| DAMO: Data- and Model-aware Alignment of Multi-modal LLMs | Code | 1 |
| PAINT: Paying Attention to INformed Tokens to Mitigate Hallucination in Large Vision-Language Model | Code | 1 |
| ChartInsighter: An Approach for Mitigating Hallucination in Time-series Chart Summary Generation with A Benchmark Dataset | Code | 1 |
| Knowledge Graph-based Retrieval-Augmented Generation for Schema Matching | Code | 1 |
| Multimodal LLMs Can Reason about Aesthetics in Zero-Shot | Code | 1 |
| VASparse: Towards Efficient Visual Hallucination Mitigation for Large Vision-Language Model via Visual-Aware Sparsification | Code | 1 |
| ECBench: Can Multi-modal Foundation Models Understand the Egocentric World? A Holistic Embodied Cognition Benchmark | Code | 1 |
| Mitigating Hallucination for Large Vision Language Model by Inter-Modality Correlation Calibration Decoding | Code | 1 |
| VASparse: Towards Efficient Visual Hallucination Mitigation via Visual-Aware Token Sparsification | Code | 1 |
| Octopus: Alleviating Hallucination via Dynamic Contrastive Decoding | Code | 1 |
| Harnessing Large Language Models for Knowledge Graph Question Answering via Adaptive Multi-Aspect Retrieval-Augmentation | Code | 1 |
| Extract Free Dense Misalignment from CLIP | Code | 1 |
| Filter-then-Generate: Large Language Models with Structure-Text Adapter for Knowledge Graph Completion | Code | 1 |
| Can LLMs be Good Graph Judge for Knowledge Graph Construction? | Code | 1 |
| VidHal: Benchmarking Temporal Hallucinations in Vision LLMs | Code | 1 |
| AtomR: Atomic Operator-Empowered Large Language Models for Heterogeneous Knowledge Reasoning | Code | 1 |
| VaLiD: Mitigating the Hallucination of Large Vision Language Models by Visual Layer Fusion Contrastive Decoding | Code | 1 |
| Thinking Before Looking: Improving Multimodal LLM Reasoning via Mitigating Visual Hallucination | Code | 1 |
| AssistRAG: Boosting the Potential of Large Language Models with an Intelligent Information Assistant | Code | 1 |
Page 9 of 73
