
Hallucination Papers

Showing 701–725 of 1816 papers

Title | Status | Hype
Gradient-guided Attention Map Editing: Towards Efficient Contextual Hallucination Mitigation | | 0
Seeing What's Not There: Spurious Correlation in Multimodal LLMs | | 0
Attention Reallocation: Towards Zero-cost and Controllable Hallucination Mitigation of MLLMs | | 0
Attention Hijackers: Detect and Disentangle Attention Hijacking in LVLMs for Hallucination Mitigation | | 0
EAZY: Eliminating Hallucinations in LVLMs by Zeroing out Hallucinatory Image Tokens | | 0
Benchmarking Chinese Medical LLMs: A Medbench-based Analysis of Performance Gaps and Hierarchical Optimization Strategies | | 0
CtrlRAG: Black-box Adversarial Attacks Based on Masked Language Models in Retrieval-Augmented Language Generation | | 0
Mitigating Hallucinations in YOLO-based Object Detection Models: A Revisit to Out-of-Distribution Detection | | 0
VLRMBench: A Comprehensive and Challenging Benchmark for Vision-Language Reward Models | Code | 0
CalliReader: Contextualizing Chinese Calligraphy via an Embedding-Aligned Vision-Language Model | | 0
PerturboLLaVA: Reducing Multimodal Hallucinations with Perturbative Visual Training | | 0
Treble Counterfactual VLMs: A Causal Approach to Hallucination | Code | 0
SINdex: Semantic INconsistency Index for Hallucination Detection in LLMs | | 0
Maximum Hallucination Standards for Domain-Specific Large Language Models | | 0
TPC: Cross-Temporal Prediction Connection for Vision-Language Model Hallucination Reduction | | 0
LVLM-Compress-Bench: Benchmarking the Broader Impact of Large Vision-Language Model Compression | Code | 0
See What You Are Told: Visual Attention Sink in Large Multimodal Models | | 0
DSVD: Dynamic Self-Verify Decoding for Faithful Generation in Large Language Models | | 0
Monitoring Decoding: Mitigating Hallucination via Evaluating the Factuality of Partial Response during Generation | | 0
Towards Understanding Text Hallucination of Diffusion Models via Local Generation Bias | | 0
Shakespearean Sparks: The Dance of Hallucination and Creativity in LLMs' Decoding Layers | Code | 0
MCiteBench: A Multimodal Benchmark for Generating Text with Citations | Code | 0
SAFE: A Sparse Autoencoder-Based Framework for Robust Query Enrichment and Hallucination Mitigation in LLMs | | 0
Adaptively profiling models with task elicitation | | 0
Evaluating LLMs' Assessment of Mixed-Context Hallucination Through the Lens of Summarization | Code | 0
Page 29 of 73

No leaderboard results yet.