SOTAVerified

Hallucination Papers

Showing 526–550 of 1816 papers

Title | Status | Hype
----- | ------ | ----
DHCP: Detecting Hallucinations by Cross-modal Attention Pattern in Large Vision-Language Models | - | 0
Can LLMs be Good Graph Judge for Knowledge Graph Construction? | Code | 1
Efficient Self-Improvement in Multimodal Large Language Models: A Model-Level Judge-Free Approach | - | 0
Meaningless is better: hashing bias-inducing words in LLM prompts improves performance in logical reasoning and statistical learning | - | 0
VLRewardBench: A Challenging Benchmark for Vision-Language Generative Reward Models | - | 0
A Topic-level Self-Correctional Approach to Mitigate Hallucinations in MLLMs | - | 0
AI2T: Building Trustable AI Tutors by Interactively Teaching a Self-Aware Learning Agent | - | 0
VidHal: Benchmarking Temporal Hallucinations in Vision LLMs | Code | 1
AtomR: Atomic Operator-Empowered Large Language Models for Heterogeneous Knowledge Reasoning | Code | 1
Enhancing Multi-Agent Consensus through Third-Party LLM Integration: Analyzing Uncertainty and Mitigating Hallucinations in Large Language Models | - | 0
O1 Replication Journey -- Part 2: Surpassing O1-preview through Simple Distillation, Big Progress or Bitter Lesson? | Code | 7
VaLiD: Mitigating the Hallucination of Large Vision Language Models by Visual Layer Fusion Contrastive Decoding | Code | 1
Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Attention Lens | Code | 2
Ontology-Constrained Generation of Domain-Specific Clinical Summaries | Code | 0
ICT: Image-Object Cross-Level Trusted Intervention for Mitigating Object Hallucination in Large Vision-Language Models | - | 0
Detecting Hallucinations in Virtual Histology with Neural Precursors | - | 0
Leveraging LLMs for Legacy Code Modernization: Challenges and Opportunities for LLM-Generated Documentation | - | 0
Sycophancy in Large Language Models: Causes and Mitigations | - | 0
CATCH: Complementary Adaptive Token-level Contrastive Decoding to Mitigate Hallucinations in LVLMs | - | 0
Can Open-source LLMs Enhance Data Synthesis for Toxic Detection? An Experimental Study | - | 0
VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation | Code | 0
Mitigating Knowledge Conflicts in Language Model-Driven Question Answering | - | 0
Enabling Explainable Recommendation in E-commerce with LLM-powered Product Knowledge Graph | - | 0
INVARLLM: LLM-assisted Physical Invariant Extraction for Cyber-Physical Systems Anomaly Detection | - | 0
Understanding Multimodal LLMs: the Mechanistic Interpretability of Llava in Visual Question Answering | Code | 0
Page 22 of 73