SOTAVerified: Hallucination Papers

Showing 1551–1600 of 1816 papers

Title · Status · Hype (every entry on this page has an empty Status and a Hype score of 0)

HOB-CNN: Hallucination of Occluded Branches with a Convolutional Neural Network for 2D Fruit Trees
HOIGen-1M: A Large-scale Dataset for Human-Object Interaction Video Generation
Honest AI: Fine-Tuning "Small" Language Models to Say "I Don't Know", and Reducing Hallucination in RAG
How to Build an AI Tutor That Can Adapt to Any Course Using Knowledge Graph-Enhanced Retrieval-Augmented Generation (KG-RAG)
How to Detect and Defeat Molecular Mirage: A Metric-Driven Benchmark for Hallucination in LLM-based Molecular Comprehension
How to Explore with Belief: State Entropy Maximization in POMDPs
H-POPE: Hierarchical Polling-based Probing Evaluation of Hallucinations in Large Vision-Language Models
Hybrid-RACA: Hybrid Retrieval-Augmented Composition Assistance for Real-time Text Prediction
Hydra: An Agentic Reasoning Approach for Enhancing Adversarial Robustness and Mitigating Hallucinations in Vision-Language Models
Hyperbolic Learning with Synthetic Captions for Open-World Detection
ICT: Image-Object Cross-Level Trusted Intervention for Mitigating Object Hallucination in Large Vision-Language Models
Identity-Aware Deep Face Hallucination via Adversarial Face Verification
Identity-Preserving Pose-Robust Face Hallucination Through Face Subspace Prior
IERL: Interpretable Ensemble Representation Learning -- Combining CrowdSourced Knowledge and Distributed Semantic Representations
IllusionBench: A Large-scale and Comprehensive Benchmark for Visual Illusion Understanding in Vision-Language Models
Importing Phantoms: Measuring LLM Package Hallucination Vulnerabilities
(Im)possibility of Automated Hallucination Detection in Large Language Models
Improbable Bigrams Expose Vulnerabilities of Incomplete Tokens in Byte-Level Tokenizers
Improved Beam Search for Hallucination Mitigation in Abstractive Summarization
Improved Single Camera BEV Perception Using Multi-Camera Training
Improving Assessment of Tutoring Practices using Retrieval-Augmented Generation
Improving Factuality with Explicit Working Memory
Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection
Long-Form Speech Translation through Segmentation with Finite-State Decoding Constraints on Large Language Models
Improving Reliability and Explainability of Medical Question Answering through Atomic Fact Checking in Retrieval-Augmented LLMs
Improving RNN-Transducers with Acoustic LookAhead
Improving Scientific Hypothesis Generation with Knowledge Grounded Large Language Models
Improving the Reliability of Large Language Models by Leveraging Uncertainty-Aware In-Context Learning
Improving the Reliability of LLMs: Combining CoT, RAG, Self-Consistency, and Self-Verification
Improving Whisper's Recognition Performance for Under-Represented Language Kazakh Leveraging Unpaired Speech and Text
Incremental Scene Synthesis
Inertial Hallucinations -- When Wearable Inertial Devices Start Seeing Things
Information-Theoretic Text Hallucination Reduction for Video-grounded Dialogue
Ingest-And-Ground: Dispelling Hallucinations from Continually-Pretrained LLMs with RAG
Insights from Verification: Training a Verilog Generation LLM with Reinforcement Learning with Testbench Feedback
Insights into Classifying and Mitigating LLMs' Hallucinations
Instance-level Facial Attributes Transfer with Geometry-Aware Flow
Instruction-Oriented Preference Alignment for Enhancing Multi-Modal Comprehension Capability of MLLMs
Internal and External Knowledge Interactive Refinement Framework for Knowledge-Intensive Question Answering
InternalInspector I^2: Robust Confidence Estimation in LLMs through Internal States
Interpretable Zero-shot Learning with Infinite Class Concepts
Interpreting and Mitigating Hallucination in MLLMs through Multi-agent Debate
Invar-RAG: Invariant LLM-aligned Retrieval for Better Generation
Investigating and Addressing Hallucinations of LLMs in Tasks Involving Negation
Investigating the Role of Prompting and External Tools in Hallucination Rates of Large Language Models
IPL: Leveraging Multimodal Large Language Models for Intelligent Product Listing
Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection
Is Your Text-to-Image Model Robust to Caption Noise?
Iter-AHMCL: Alleviate Hallucination for Large Language Model via Iterative Model-level Contrastive Learning
It's About Time: Incorporating Temporality in Retrieval Augmented Language Models
Page 32 of 37
