SOTAVerified

Hallucination Papers

Showing 1551–1600 of 1816 papers

Title | Status | Hype
Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization | Code | 0
Low to High Dimensional Modality Hallucination using Aggregated Fields of View | Code | 0
Modality Distillation with Multiple Stream Networks for Action Recognition | Code | 0
Scalable Model Editing via Customized Expert Networks | Code | 0
LongHalQA: Long-Context Hallucination Evaluation for MultiModal Large Language Models | Code | 0
The troublesome kernel -- On hallucinations, no free lunches and the accuracy-stability trade-off in inverse problems | Code | 0
Careless Whisper: Speech-to-Text Hallucination Harms | Code | 0
GAPO: Learning Preferential Prompt through Generative Adversarial Policy Optimization | Code | 0
Science Checker Reloaded: A Bidirectional Paradigm for Transparency and Logical Reasoning | Code | 0
Logic Query of Thoughts: Guiding Large Language Models to Answer Complex Logic Queries with Knowledge Graphs | Code | 0
Investigating Memorization of Conspiracy Theories in Text Generation | Code | 0
Multi-FAct: Assessing Factuality of Multilingual LLMs using FActScore | Code | 0
MultiHal: Multilingual Dataset for Knowledge-Graph Grounded Evaluation of LLM Hallucinations | Code | 0
ScVLM: Enhancing Vision-Language Model for Safety-Critical Event Understanding | Code | 0
GAF-Guard: An Agentic Framework for Risk Management and Governance in Large Language Models | Code | 0
SECOND: Mitigating Perceptual Hallucination in Vision-Language Models via Selective and Contrastive Decoding | Code | 0
LLMs and Memorization: On Quality and Specificity of Copyright Compliance | Code | 0
U-NIAH: Unified RAG and LLM Evaluation for Long Context Needle-In-A-Haystack | Code | 0
LLM Internal States Reveal Hallucination Risk Faced With a Query | Code | 0
LLM Inference Enhanced by External Knowledge: A Survey | Code | 0
Multimodal Preference Data Synthetic Alignment with Reward Model | Code | 0
VLRMBench: A Comprehensive and Challenging Benchmark for Vision-Language Reward Models | Code | 0
Multimodal Survival Modeling in the Age of Foundation Models | Code | 0
LLM Hallucinations in Practical Code Generation: Phenomena, Mechanism, and Mitigation | Code | 0
From Single to Multi: How LLMs Hallucinate in Multi-Document Summarization | Code | 0
Multi-party Goal Tracking with LLMs: Comparing Pre-training, Fine-tuning, and Prompt Engineering | Code | 0
Think Before You Act: A Two-Stage Framework for Mitigating Gender Bias Towards Vision-Language Tasks | Code | 0
Bridging the Visual Gap: Fine-Tuning Multimodal Models with Knowledge-Adapted Captions | Code | 0
DecoPrompt : Decoding Prompts Reduces Hallucinations when Large Language Models Meet False Premises | Code | 0
From Artificial Needles to Real Haystacks: Improving Retrieval Capabilities in LLMs by Finetuning on Synthetic Data | Code | 0
Deceptive Semantic Shortcuts on Reasoning Chains: How Far Can Models Go without Hallucination? | Code | 0
Unified Triplet-Level Hallucination Evaluation for Large Vision-Language Models | Code | 0
Navigating Noisy Feedback: Enhancing Reinforcement Learning with Error-Prone Language Models | Code | 0
Brain MRI Image Super Resolution using Phase Stretch Transform and Transfer Learning | Code | 0
NCL-UoR at SemEval-2025 Task 3: Detecting Multilingual Hallucination and Related Observable Overgeneration Text Spans with Modified RefChecker and Modified SelfCheckGPT | Code | 0
DDFAV: Remote Sensing Large Vision Language Models Dataset and Evaluation Benchmark | Code | 0
VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation | Code | 0
Data-Centric Human Preference Optimization with Rationales | Code | 0
SegSub: Evaluating Robustness to Knowledge Conflicts and Hallucinations in Vision-Language Models | Code | 0
Through the Magnifying Glass: Adaptive Perception Magnification for Hallucination-Free VLM Decoding | Code | 0
LLM-based Query Expansion Fails for Unfamiliar and Ambiguous Queries | Code | 0
AI-Enhanced Cognitive Behavioral Therapy: Deep Learning and Large Language Models for Extracting Cognitive Pathways from Social Media Texts | Code | 0
Walk&Retrieve: Simple Yet Effective Zero-shot Retrieval-Augmented Generation via Knowledge Graph Walks | Code | 0
Fine-tuning Large Language Models for Improving Factuality in Legal Question Answering | Code | 0
NGEP: A Graph-based Event Planning Framework for Story Generation | Code | 0
Linear Correlation in LM's Compositional Generalization and Hallucination | Code | 0
Noise Augmented Fine Tuning for Mitigating Hallucinations in Large Language Models | Code | 0
NoiseBoost: Alleviating Hallucination with Noise Perturbation for Multimodal Large Language Models | Code | 0
Self-Consistent Decoding for More Factual Open Responses | Code | 0
LightHouse: A Survey of AGI Hallucination | Code | 0
Page 32 of 37
