
Hallucination Papers

Showing 701–750 of 1816 papers

Title | Status | Hype
Seeing What's Not There: Spurious Correlation in Multimodal LLMs | - | 0
Gradient-guided Attention Map Editing: Towards Efficient Contextual Hallucination Mitigation | - | 0
Attention Reallocation: Towards Zero-cost and Controllable Hallucination Mitigation of MLLMs | - | 0
Attention Hijackers: Detect and Disentangle Attention Hijacking in LVLMs for Hallucination Mitigation | - | 0
VLRMBench: A Comprehensive and Challenging Benchmark for Vision-Language Reward Models | Code | 0
EAZY: Eliminating Hallucinations in LVLMs by Zeroing out Hallucinatory Image Tokens | - | 0
Mitigating Hallucinations in YOLO-based Object Detection Models: A Revisit to Out-of-Distribution Detection | - | 0
Benchmarking Chinese Medical LLMs: A Medbench-based Analysis of Performance Gaps and Hierarchical Optimization Strategies | - | 0
CtrlRAG: Black-box Adversarial Attacks Based on Masked Language Models in Retrieval-Augmented Language Generation | - | 0
CalliReader: Contextualizing Chinese Calligraphy via an Embedding-Aligned Vision-Language Model | - | 0
PerturboLLaVA: Reducing Multimodal Hallucinations with Perturbative Visual Training | - | 0
Treble Counterfactual VLMs: A Causal Approach to Hallucination | Code | 0
Maximum Hallucination Standards for Domain-Specific Large Language Models | - | 0
SINdex: Semantic INconsistency Index for Hallucination Detection in LLMs | - | 0
TPC: Cross-Temporal Prediction Connection for Vision-Language Model Hallucination Reduction | - | 0
LVLM-Compress-Bench: Benchmarking the Broader Impact of Large Vision-Language Model Compression | Code | 0
Towards Understanding Text Hallucination of Diffusion Models via Local Generation Bias | - | 0
DSVD: Dynamic Self-Verify Decoding for Faithful Generation in Large Language Models | - | 0
Monitoring Decoding: Mitigating Hallucination via Evaluating the Factuality of Partial Response during Generation | - | 0
See What You Are Told: Visual Attention Sink in Large Multimodal Models | - | 0
Shakespearean Sparks: The Dance of Hallucination and Creativity in LLMs' Decoding Layers | Code | 0
SAFE: A Sparse Autoencoder-Based Framework for Robust Query Enrichment and Hallucination Mitigation in LLMs | - | 0
MCiteBench: A Multimodal Benchmark for Generating Text with Citations | Code | 0
Adaptively profiling models with task elicitation | - | 0
Evaluating LLMs' Assessment of Mixed-Context Hallucination Through the Lens of Summarization | Code | 0
LLM-Advisor: An LLM Benchmark for Cost-efficient Path Planning across Multiple Terrains | - | 0
Tackling Hallucination from Conditional Models for Medical Image Reconstruction with DynamicDPS | - | 0
Explainable Depression Detection in Clinical Interviews with Personalized Retrieval-Augmented Generation | - | 0
NCL-UoR at SemEval-2025 Task 3: Detecting Multilingual Hallucination and Related Observable Overgeneration Text Spans with Modified RefChecker and Modified SelfCheckGPT | Code | 0
Unmasking Digital Falsehoods: A Comparative Analysis of LLM-Based Misinformation Detection Strategies | - | 0
Steer LLM Latents for Hallucination Detection | - | 0
U-NIAH: Unified RAG and LLM Evaluation for Long Context Needle-In-A-Haystack | Code | 0
UniFa: A unified feature hallucination framework for any-shot object detection | - | 0
Semantic Volume: Quantifying and Detecting both External and Internal Uncertainty in LLMs | - | 0
MedHallTune: An Instruction-Tuning Benchmark for Mitigating Medical Hallucination in Vision-Language Models | Code | 0
Vision-Encoders (Already) Know What They See: Mitigating Object Hallucination via Simple Fine-Grained CLIPScore | Code | 0
On the Importance of Text Preprocessing for Multimodal Representation Learning and Pathology Report Generation | - | 0
Winning Big with Small Models: Knowledge Distillation vs. Self-Training for Reducing Hallucination in QA Agents | - | 0
Exploring the Generalizability of Factual Hallucination Mitigation via Enhancing Precise Knowledge Utilization | - | 0
BRIDO: Bringing Democratic Order to Abstractive Summarization | - | 0
Stealthy Backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models | - | 0
'Generalization is hallucination' through the lens of tensor completions | - | 0
Exploring Causes and Mitigation of Hallucinations in Large Vision Language Models | - | 0
Uncertainty-Aware Fusion: An Ensemble Framework for Mitigating Hallucinations in Large Language Models | - | 0
The Law of Knowledge Overshadowing: Towards Understanding, Predicting, and Preventing LLM Hallucination | - | 0
ZiGong 1.0: A Large Language Model for Financial Credit | - | 0
The Role of Background Information in Reducing Object Hallucination in Vision-Language Models: Insights from Cutoff API Prompting | - | 0
Large Language Models Struggle to Describe the Haystack without Human Help: Human-in-the-loop Evaluation of LLMs | - | 0
Hallucination Detection in Large Language Models with Metamorphic Relations | - | 0
MedHallu: A Comprehensive Benchmark for Detecting Medical Hallucinations in Large Language Models | - | 0
Page 15 of 37

No leaderboard results yet.