SOTAVerified

Hallucination

Papers

Showing 726–750 of 1816 papers

| Title | Status | Hype |
| --- | --- | --- |
| LLM-Advisor: An LLM Benchmark for Cost-efficient Path Planning across Multiple Terrains | | 0 |
| Tackling Hallucination from Conditional Models for Medical Image Reconstruction with DynamicDPS | | 0 |
| Explainable Depression Detection in Clinical Interviews with Personalized Retrieval-Augmented Generation | | 0 |
| NCL-UoR at SemEval-2025 Task 3: Detecting Multilingual Hallucination and Related Observable Overgeneration Text Spans with Modified RefChecker and Modified SelfCheckGPT | Code | 0 |
| Unmasking Digital Falsehoods: A Comparative Analysis of LLM-Based Misinformation Detection Strategies | | 0 |
| Steer LLM Latents for Hallucination Detection | | 0 |
| U-NIAH: Unified RAG and LLM Evaluation for Long Context Needle-In-A-Haystack | Code | 0 |
| UniFa: A unified feature hallucination framework for any-shot object detection | | 0 |
| Semantic Volume: Quantifying and Detecting both External and Internal Uncertainty in LLMs | | 0 |
| MedHallTune: An Instruction-Tuning Benchmark for Mitigating Medical Hallucination in Vision-Language Models | Code | 0 |
| Vision-Encoders (Already) Know What They See: Mitigating Object Hallucination via Simple Fine-Grained CLIPScore | Code | 0 |
| On the Importance of Text Preprocessing for Multimodal Representation Learning and Pathology Report Generation | | 0 |
| Winning Big with Small Models: Knowledge Distillation vs. Self-Training for Reducing Hallucination in QA Agents | | 0 |
| Exploring the Generalizability of Factual Hallucination Mitigation via Enhancing Precise Knowledge Utilization | | 0 |
| BRIDO: Bringing Democratic Order to Abstractive Summarization | | 0 |
| Stealthy Backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models | | 0 |
| 'Generalization is hallucination' through the lens of tensor completions | | 0 |
| Exploring Causes and Mitigation of Hallucinations in Large Vision Language Models | | 0 |
| Uncertainty-Aware Fusion: An Ensemble Framework for Mitigating Hallucinations in Large Language Models | | 0 |
| The Law of Knowledge Overshadowing: Towards Understanding, Predicting, and Preventing LLM Hallucination | | 0 |
| ZiGong 1.0: A Large Language Model for Financial Credit | | 0 |
| The Role of Background Information in Reducing Object Hallucination in Vision-Language Models: Insights from Cutoff API Prompting | | 0 |
| Large Language Models Struggle to Describe the Haystack without Human Help: Human-in-the-loop Evaluation of LLMs | | 0 |
| Hallucination Detection in Large Language Models with Metamorphic Relations | | 0 |
| MedHallu: A Comprehensive Benchmark for Detecting Medical Hallucinations in Large Language Models | | 0 |
Page 30 of 73

No leaderboard results yet.