SOTAVerified

Hallucination Papers

Showing 1101–1125 of 1816 papers

| Title | Status | Hype |
| --- | --- | --- |
| Enhanced document retrieval with topic embeddings | | 0 |
| CLIP-DPO: Vision-Language Models as a Source of Preference for Fixing Hallucinations in LVLMs | | 0 |
| Cognitive LLMs: Towards Integrating Cognitive Architectures and Large Language Models for Manufacturing Decision-making | | 0 |
| Lower Layer Matters: Alleviating Hallucination via Multi-Layer Fusion Contrastive Decoding with Truthfulness Refocused | | 0 |
| Large Language Models Might Not Care What You Are Saying: Prompt Format Beats Descriptions | | 0 |
| Plan with Code: Comparing approaches for robust NL to DSL generation | | 0 |
| CodeMirage: Hallucinations in Code Generated by Large Language Models | | 0 |
| Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability | | 0 |
| Audit-LLM: Multi-Agent Collaboration for Log-based Insider Threat Detection | | 0 |
| Reference-free Hallucination Detection for Large Vision-Language Models | | 0 |
| Improving Whisper's Recognition Performance for Under-Represented Language Kazakh Leveraging Unpaired Speech and Text | | 0 |
| FiSTECH: Financial Style Transfer to Enhance Creativity without Hallucinations in LLMs | | 0 |
| Order Matters in Hallucination: Reasoning Order as Benchmark and Reflexive Prompting for Large-Language-Models | Code | 0 |
| Handwritten Code Recognition for Pen-and-Paper CS Education | Code | 0 |
| KnowPO: Knowledge-aware Preference Optimization for Controllable Knowledge Selection in Retrieval-Augmented Language Models | | 0 |
| MAO: A Framework for Process Model Generation with Multi-Agent Orchestration | | 0 |
| Improving Zero-Shot ObjectNav with Generative Communication | | 0 |
| Misinforming LLMs: vulnerabilities, challenges and opportunities | | 0 |
| Piculet: Specialized Models-Guided Hallucination Decrease for MultiModal Large Language Models | | 0 |
| Alleviating Hallucination in Large Vision-Language Models with Active Retrieval Augmentation | | 0 |
| Prompting Medical Large Vision-Language Models to Diagnose Pathologies by Visual Question Answering | | 0 |
| Cost-Effective Hallucination Detection for LLMs | | 0 |
| Interpreting and Mitigating Hallucination in MLLMs through Multi-agent Debate | | 0 |
| VILA^2: VILA Augmented VILA | | 0 |
| WildHallucinations: Evaluating Long-form Factuality in LLMs with Real-World Entity Queries | | 0 |
Page 45 of 73
