SOTAVerified: Hallucination Papers

Showing 301–350 of 1816 papers

| Title | Status | Hype |
|---|---|---|
| Attention Reallocation: Towards Zero-cost and Controllable Hallucination Mitigation of MLLMs | | 0 |
| Seeing What's Not There: Spurious Correlation in Multimodal LLMs | | 0 |
| Gradient-guided Attention Map Editing: Towards Efficient Contextual Hallucination Mitigation | | 0 |
| EAZY: Eliminating Hallucinations in LVLMs by Zeroing out Hallucinatory Image Tokens | | 0 |
| Benchmarking Chinese Medical LLMs: A Medbench-based Analysis of Performance Gaps and Hierarchical Optimization Strategies | | 0 |
| VLRMBench: A Comprehensive and Challenging Benchmark for Vision-Language Reward Models | Code | 0 |
| CtrlRAG: Black-box Adversarial Attacks Based on Masked Language Models in Retrieval-Augmented Language Generation | | 0 |
| Mitigating Hallucinations in YOLO-based Object Detection Models: A Revisit to Out-of-Distribution Detection | | 0 |
| PerturboLLaVA: Reducing Multimodal Hallucinations with Perturbative Visual Training | | 0 |
| CalliReader: Contextualizing Chinese Calligraphy via an Embedding-Aligned Vision-Language Model | | 0 |
| Treble Counterfactual VLMs: A Causal Approach to Hallucination | Code | 0 |
| SINdex: Semantic INconsistency Index for Hallucination Detection in LLMs | | 0 |
| Maximum Hallucination Standards for Domain-Specific Large Language Models | | 0 |
| TPC: Cross-Temporal Prediction Connection for Vision-Language Model Hallucination Reduction | | 0 |
| LVLM-Compress-Bench: Benchmarking the Broader Impact of Large Vision-Language Model Compression | Code | 0 |
| Monitoring Decoding: Mitigating Hallucination via Evaluating the Factuality of Partial Response during Generation | | 0 |
| DSVD: Dynamic Self-Verify Decoding for Faithful Generation in Large Language Models | | 0 |
| Attentive Reasoning Queries: A Systematic Method for Optimizing Instruction-Following in Large Language Models | Code | 11 |
| See What You Are Told: Visual Attention Sink in Large Multimodal Models | | 0 |
| Towards Understanding Text Hallucination of Diffusion Models via Local Generation Bias | | 0 |
| Shakespearean Sparks: The Dance of Hallucination and Creativity in LLMs' Decoding Layers | Code | 0 |
| SAFE: A Sparse Autoencoder-Based Framework for Robust Query Enrichment and Hallucination Mitigation in LLMs | | 0 |
| MCiteBench: A Multimodal Benchmark for Generating Text with Citations | Code | 0 |
| WMNav: Integrating Vision-Language Models into World Models for Object Goal Navigation | Code | 2 |
| Adaptively profiling models with task elicitation | | 0 |
| Evaluating LLMs' Assessment of Mixed-Context Hallucination Through the Lens of Summarization | Code | 0 |
| LLM-Advisor: An LLM Benchmark for Cost-efficient Path Planning across Multiple Terrains | | 0 |
| Tackling Hallucination from Conditional Models for Medical Image Reconstruction with DynamicDPS | | 0 |
| Explainable Depression Detection in Clinical Interviews with Personalized Retrieval-Augmented Generation | | 0 |
| NCL-UoR at SemEval-2025 Task 3: Detecting Multilingual Hallucination and Related Observable Overgeneration Text Spans with Modified RefChecker and Modified SelfCheckGPT | Code | 0 |
| Unmasking Digital Falsehoods: A Comparative Analysis of LLM-Based Misinformation Detection Strategies | | 0 |
| Steer LLM Latents for Hallucination Detection | | 0 |
| UniFa: A unified feature hallucination framework for any-shot object detection | | 0 |
| U-NIAH: Unified RAG and LLM Evaluation for Long Context Needle-In-A-Haystack | Code | 0 |
| MedHallTune: An Instruction-Tuning Benchmark for Mitigating Medical Hallucination in Vision-Language Models | Code | 0 |
| Towards General Visual-Linguistic Face Forgery Detection (V2) | Code | 1 |
| Semantic Volume: Quantifying and Detecting both External and Internal Uncertainty in LLMs | | 0 |
| Mitigating Hallucinations in Large Vision-Language Models by Adaptively Constraining Information Flow | Code | 1 |
| One-for-More: Continual Diffusion Model for Anomaly Detection | Code | 2 |
| ProAPO: Progressively Automatic Prompt Optimization for Visual Classification | Code | 1 |
| Vision-Encoders (Already) Know What They See: Mitigating Object Hallucination via Simple Fine-Grained CLIPScore | Code | 0 |
| Exploring the Generalizability of Factual Hallucination Mitigation via Enhancing Precise Knowledge Utilization | | 0 |
| Medical Hallucinations in Foundation Models and Their Impact on Healthcare | Code | 2 |
| On the Importance of Text Preprocessing for Multimodal Representation Learning and Pathology Report Generation | | 0 |
| Winning Big with Small Models: Knowledge Distillation vs. Self-Training for Reducing Hallucination in QA Agents | | 0 |
| BRIDO: Bringing Democratic Order to Abstractive Summarization | | 0 |
| Verdict: A Library for Scaling Judge-Time Compute | Code | 3 |
| Stealthy Backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models | | 0 |
| Hallucination Detection in LLMs Using Spectral Features of Attention Maps | Code | 1 |
| Exploring Causes and Mitigation of Hallucinations in Large Vision Language Models | | 0 |
Page 7 of 37