SOTAVerified

Hallucination Papers

Showing 776–800 of 1816 papers

Title | Status | Hype
Towards Reliable Medical Question Answering: Techniques and Challenges in Mitigating Hallucinations in Language Models | | 0
ConVis: Contrastive Decoding with Hallucination Visualization for Mitigating Hallucinations in Multimodal Large Language Models | Code | 1
Can LLM be a Good Path Planner based on Prompt Engineering? Mitigating the Hallucination for Path Planning | | 0
Internal and External Knowledge Interactive Refinement Framework for Knowledge-Intensive Question Answering | | 0
SLM Meets LLM: Balancing Latency, Interpretability and Consistency in Hallucination Detection | Code | 1
MedDiT: A Knowledge-Controlled Diffusion Transformer Framework for Dynamic Medical Image Generation in Virtual Simulated Patient | | 0
Improving Factuality in Large Language Models via Decoding-Time Hallucinatory and Truthful Comparators | Code | 0
RoVRM: A Robust Visual Reward Model Optimized via Auxiliary Textual Preference Data | Code | 0
GRATR: Zero-Shot Evidence Graph Retrieval-Augmented Trustworthiness Reasoning | Code | 0
RAG-Optimized Tibetan Tourism LLMs: Enhancing Accuracy and Personalization | | 0
Towards Analyzing and Mitigating Sycophancy in Large Vision-Language Models | | 0
Enhanced document retrieval with topic embeddings | | 0
MAPLE: Enhancing Review Generation with Multi-Aspect Prompt LEarning in Explainable Recommendation | | 0
CLIP-DPO: Vision-Language Models as a Source of Preference for Fixing Hallucinations in LVLMs | | 0
Reefknot: A Comprehensive Benchmark for Relation Hallucination Evaluation, Analysis and Mitigation in Multimodal Large Language Models | Code | 1
Cognitive LLMs: Towards Integrating Cognitive Architectures and Large Language Models for Manufacturing Decision-making | | 0
Lower Layer Matters: Alleviating Hallucination via Multi-Layer Fusion Contrastive Decoding with Truthfulness Refocused | | 0
Large Language Models Might Not Care What You Are Saying: Prompt Format Beats Descriptions | | 0
Graph Retrieval-Augmented Generation: A Survey | Code | 3
Plan with Code: Comparing approaches for robust NL to DSL generation | | 0
CodeMirage: Hallucinations in Code Generated by Large Language Models | | 0
Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability | | 0
Audit-LLM: Multi-Agent Collaboration for Log-based Insider Threat Detection | | 0
SSL: A Self-similarity Loss for Improving Generative Image Super-resolution | Code | 2
Reference-free Hallucination Detection for Large Vision-Language Models | | 0
Page 32 of 73
