SOTAVerified

Hallucination Papers

Showing 251–300 of 1816 papers

| Title | Status | Hype |
|---|---|---|
| RARE: Retrieval-Augmented Reasoning Modeling | Code | 2 |
| An Analysis of Decoding Methods for LLM-based Agents for Faithful Multi-Hop Question Answering | | 0 |
| Learning to Instruct for Visual Instruction Tuning | | 0 |
| Real-Time Evaluation Models for RAG: Who Detects Hallucinations Best? | | 0 |
| Alleviating LLM-based Generative Retrieval Hallucination in Alipay Search | | 0 |
| Tricking Retrievers with Influential Tokens: An Efficient Black-Box Corpus Poisoning Attack | | 0 |
| Vision-Amplified Semantic Entropy for Hallucination Detection in Medical Visual Question Answering | | 0 |
| Instruction-Oriented Preference Alignment for Enhancing Multi-Modal Comprehension Capability of MLLMs | | 0 |
| Mitigating Low-Level Visual Hallucinations Requires Self-Awareness: Database, Model and Training Strategy | | 0 |
| GAPO: Learning Preferential Prompt through Generative Adversarial Policy Optimization | Code | 0 |
| TN-Eval: Rubric and Evaluation Protocols for Measuring the Quality of Behavioral Therapy Notes | | 0 |
| KSHSeek: Data-Driven Approaches to Mitigating and Detecting Knowledge-Shortcut Hallucinations in Generative Models | | 0 |
| LRSCLIP: A Vision-Language Foundation Model for Aligning Remote Sensing Image with Longer Text | Code | 1 |
| CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning | Code | 1 |
| HausaNLP at SemEval-2025 Task 3: Towards a Fine-Grained Model-Aware Hallucination Detection | | 0 |
| Exploring Hallucination of Large Multimodal Models in Video Understanding: Benchmark, Analysis and Mitigation | Code | 1 |
| ShED-HD: A Shannon Entropy Distribution Framework for Lightweight Hallucination Detection on Edge Devices | | 0 |
| GeoBenchX: Benchmarking LLMs for Multistep Geospatial Tasks | Code | 1 |
| good4cir: Generating Detailed Synthetic Captions for Composed Image Retrieval | | 0 |
| Judge Anything: MLLM as a Judge Across Any Modality | | 0 |
| FactSelfCheck: Fact-Level Black-Box Hallucination Detection for LLMs | | 0 |
| ProDehaze: Prompting Diffusion Models Toward Faithful Image Dehazing | Code | 1 |
| MASH-VLM: Mitigating Action-Scene Hallucination in Video-LLMs through Disentangled Spatial-Temporal Representations | | 0 |
| Towards Lighter and Robust Evaluation for Retrieval Augmented Generation | Code | 0 |
| DNR Bench: Benchmarking Over-Reasoning in Reasoning LLMs | | 0 |
| ECKGBench: Benchmarking Large Language Models in E-commerce Leveraging Knowledge Graph | | 0 |
| Poly-FEVER: A Multilingual Fact Verification Benchmark for Hallucination Detection in Large Language Models | | 0 |
| R^2: A LLM Based Novel-to-Screenplay Generation Framework with Causal Plot Graphs | | 0 |
| MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models | | 0 |
| Enhancing LLM Generation with Knowledge Hypergraph for Evidence-Based Medicine | | 0 |
| From "Hallucination" to "Suture": Insights from Language Philosophy to Enhance Large Language Models | | 0 |
| Learning on LLM Output Signatures for gray-box LLM Behavior Analysis | Code | 0 |
| RAD: Retrieval-Augmented Decision-Making of Meta-Actions with Vision-Language Models in Autonomous Driving | | 0 |
| HICD: Hallucination-Inducing via Attention Dispersion for Contrastive Decoding to Mitigate Hallucinations in Large Language Models | Code | 0 |
| Grounded Chain-of-Thought for Multimodal Large Language Models | Code | 1 |
| ClearSight: Visual Signal Enhancement for Object Hallucination Mitigation in Multimodal Large Language Models | Code | 2 |
| LLMSeR: Enhancing Sequential Recommendation via LLM-based Data Augmentation | | 0 |
| Applications of Large Language Model Reasoning in Feature Generation | | 0 |
| RAG-KG-IL: A Multi-Agent Hybrid Framework for Reducing Hallucinations and Enhancing LLM Reasoning through RAG and Incremental Knowledge Graph Learning Integration | | 0 |
| LLM Agents for Education: Advances and Applications | | 0 |
| AIstorian lets AI be a historian: A KG-powered multi-agent system for accurate biography generation | Code | 0 |
| Prompt Injection Detection and Mitigation via AI Multi-Agent NLP Frameworks | Code | 0 |
| Learning to Inference Adaptively for Multimodal Large Language Models | | 0 |
| TruthPrInt: Mitigating LVLM Object Hallucination Via Latent Truthful-Guided Pre-Intervention | Code | 1 |
| Through the Magnifying Glass: Adaptive Perception Magnification for Hallucination-Free VLM Decoding | Code | 0 |
| Conversational Gold: Evaluating Personalized Conversational Search System using Gold Nuggets | Code | 0 |
| Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection | | 0 |
| NVP-HRI: Zero Shot Natural Voice and Posture-based Human-Robot Interaction via Large Language Model | Code | 0 |
| Attention Hijackers: Detect and Disentangle Attention Hijacking in LVLMs for Hallucination Mitigation | | 0 |
| Gradient-guided Attention Map Editing: Towards Efficient Contextual Hallucination Mitigation | | 0 |
Page 6 of 37
