SOTAVerified: Hallucination Papers

Showing 751–800 of 1816 papers

Title | Status | Hype
Safety challenges of AI in medicine in the era of large language models | - | 0
MEDIC: Towards a Comprehensive Framework for Evaluating LLMs in Clinical Applications | - | 0
Mitigating Hallucination in Visual-Language Models via Re-Balancing Contrastive Decoding | - | 0
LLMs Will Always Hallucinate, and We Need to Live With This | - | 0
Detecting Buggy Contracts via Smart Testing | - | 0
Generating Faithful and Salient Text from Multimodal Data | Code | 0
Combining LLMs and Knowledge Graphs to Reduce Hallucinations in Question Answering | - | 0
Vietnamese Legal Information Retrieval in Question-Answering System | - | 0
Hallucination Detection in LLMs: Fast and Memory-Efficient Fine-Tuned Models | Code | 0
CLUE: Concept-Level Uncertainty Estimation for Large Language Models | - | 0
Improved Single Camera BEV Perception Using Multi-Camera Training | - | 0
Multi-Source Knowledge Pruning for Retrieval-Augmented Generation: A Benchmark and Empirical Study | Code | 0
What does it take to get state of the art in simultaneous speech-to-speech translation? | - | 0
Understanding Multimodal Hallucination with Parameter-Free Representation Alignment | Code | 0
Towards Empathetic Conversational Recommender Systems | Code | 1
LLMs Prompted for Graphs: Hallucinations and Generative Capabilities | - | 0
Pre-Training Multimodal Hallucination Detectors with Corrupted Grounding Data | - | 0
UserSumBench: A Benchmark Framework for Evaluating User Summarization Approaches | - | 0
Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning | Code | 1
VLM4Bio: A Benchmark Dataset to Evaluate Pretrained Vision-Language Models for Trait Discovery from Biological Images | Code | 0
LLaVA-MoD: Making LLaVA Tiny via MoE Knowledge Distillation | Code | 3
Negation Blindness in Large Language Models: Unveiling the NO Syndrome in Image Generation | - | 0
Measuring text summarization factuality using atomic facts entailment metrics in the context of retrieval augmented generation | - | 0
Evidence-Enhanced Triplet Generation Framework for Hallucination Alleviation in Generative Question Answering | - | 0
Genetic Approach to Mitigate Hallucination in Generative IR | Code | 0
Towards Reliable Medical Question Answering: Techniques and Challenges in Mitigating Hallucinations in Language Models | - | 0
ConVis: Contrastive Decoding with Hallucination Visualization for Mitigating Hallucinations in Multimodal Large Language Models | Code | 1
Can LLM be a Good Path Planner based on Prompt Engineering? Mitigating the Hallucination for Path Planning | - | 0
Internal and External Knowledge Interactive Refinement Framework for Knowledge-Intensive Question Answering | - | 0
SLM Meets LLM: Balancing Latency, Interpretability and Consistency in Hallucination Detection | Code | 1
MedDiT: A Knowledge-Controlled Diffusion Transformer Framework for Dynamic Medical Image Generation in Virtual Simulated Patient | - | 0
Improving Factuality in Large Language Models via Decoding-Time Hallucinatory and Truthful Comparators | Code | 0
RoVRM: A Robust Visual Reward Model Optimized via Auxiliary Textual Preference Data | Code | 0
GRATR: Zero-Shot Evidence Graph Retrieval-Augmented Trustworthiness Reasoning | Code | 0
RAG-Optimized Tibetan Tourism LLMs: Enhancing Accuracy and Personalization | - | 0
Towards Analyzing and Mitigating Sycophancy in Large Vision-Language Models | - | 0
Enhanced document retrieval with topic embeddings | - | 0
MAPLE: Enhancing Review Generation with Multi-Aspect Prompt LEarning in Explainable Recommendation | - | 0
CLIP-DPO: Vision-Language Models as a Source of Preference for Fixing Hallucinations in LVLMs | - | 0
Reefknot: A Comprehensive Benchmark for Relation Hallucination Evaluation, Analysis and Mitigation in Multimodal Large Language Models | Code | 1
Cognitive LLMs: Towards Integrating Cognitive Architectures and Large Language Models for Manufacturing Decision-making | - | 0
Lower Layer Matters: Alleviating Hallucination via Multi-Layer Fusion Contrastive Decoding with Truthfulness Refocused | - | 0
Large Language Models Might Not Care What You Are Saying: Prompt Format Beats Descriptions | - | 0
Graph Retrieval-Augmented Generation: A Survey | Code | 3
Plan with Code: Comparing approaches for robust NL to DSL generation | - | 0
CodeMirage: Hallucinations in Code Generated by Large Language Models | - | 0
Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability | - | 0
Audit-LLM: Multi-Agent Collaboration for Log-based Insider Threat Detection | - | 0
SSL: A Self-similarity Loss for Improving Generative Image Super-resolution | Code | 2
Reference-free Hallucination Detection for Large Vision-Language Models | - | 0
Page 16 of 37