SOTAVerified: Hallucination Papers

Showing 1201–1250 of 1816 papers

Title | Status | Hype
ByDeWay: Boost Your multimodal LLM with DEpth prompting in a Training-Free Way |  | 0
Calibrated Language Models Must Hallucinate |  | 0
CalliReader: Contextualizing Chinese Calligraphy via an Embedding-Aligned Vision-Language Model |  | 0
Calm-Whisper: Reduce Whisper Hallucination On Non-Speech By Calming Crazy Heads Down |  | 0
Can a Hallucinating Model help in Reducing Human "Hallucination"? |  | 0
Can a Transformer Pass the Wug Test? Tuning Copying Bias in Neural Morphological Inflection Models |  | 0
Can Foundational Large Language Models Assist with Conducting Pharmaceuticals Manufacturing Investigations? |  | 0
Can Knowledge Graphs Make Large Language Models More Trustworthy? An Empirical Study over Open-ended Question Answering |  | 0
Can Large Language Models Play Games? A Case Study of A Self-Play Approach |  | 0
Can LLM be a Good Path Planner based on Prompt Engineering? Mitigating the Hallucination for Path Planning |  | 0
Can LLMs Detect Intrinsic Hallucinations in Paraphrasing and Machine Translation? |  | 0
Can Open-source LLMs Enhance Data Synthesis for Toxic Detection?: An Experimental Study |  | 0
Can Structured Data Reduce Epistemic Uncertainty? |  | 0
Can We Catch the Elephant? A Survey of the Evolvement of Hallucination Evaluation on Natural Language Generation |  | 0
Can Your Uncertainty Scores Detect Hallucinated Entity? |  | 0
Capturing AI's Attention: Physics of Repetition, Hallucination, Bias and Beyond |  | 0
CARBD-Ko: A Contextually Annotated Review Benchmark Dataset for Aspect-Level Sentiment Classification in Korean |  | 0
CarbonChat: Large Language Model-Based Corporate Carbon Emission Analysis and Climate Knowledge Q&A System |  | 0
Make VLM Recognize Visual Hallucination on Cartoon Character Image with Pose Information |  | 0
CATCH: Complementary Adaptive Token-level Contrastive Decoding to Mitigate Hallucinations in LVLMs |  | 0
CBSiMT: Mitigating Hallucination in Simultaneous Machine Translation with Weighted Prefix-to-Prefix Training |  | 0
A Multitask Training Approach to Enhance Whisper with Contextual Biasing and Open-Vocabulary Keyword Spotting |  | 0
CCNU at SemEval-2025 Task 3: Leveraging Internal and External Knowledge of Large Language Models for Multilingual Hallucination Annotation |  | 0
CC-OCR: A Comprehensive and Challenging OCR Benchmark for Evaluating Large Multimodal Models in Literacy |  | 0
CerfGAN: A Compact, Effective, Robust, and Fast Model for Unsupervised Multi-Domain Image-to-Image Translation |  | 0
CG-Bench: Clue-grounded Question Answering Benchmark for Long Video Understanding |  | 0
Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models |  | 0
Chain-of-Programming (CoP): Empowering Large Language Models for Geospatial Code Generation |  | 0
Chain-of-Thought Poisoning Attacks against R1-based Retrieval-Augmented Generation Systems |  | 0
ChallengeMe: An Adversarial Learning-enabled Text Summarization Framework |  | 0
Challenges in Domain-Specific Abstractive Summarization and How to Overcome them |  | 0
Challenges of Large Language Models for Mental Health Counseling |  | 0
Chaos with Keywords: Exposing Large Language Models Sycophantic Hallucination to Misleading Keywords and Evaluating Defense Strategies |  | 0
CHARP: Conversation History AwaReness Probing for Knowledge-grounded Dialogue Systems |  | 0
ChatASU: Evoking LLM's Reflexion to Truly Understand Aspect Sentiment in Dialogues |  | 0
ChatGPT (Feb 13 Version) is a Chinese Room |  | 0
chatClimate: Grounding Conversational AI in Climate Science |  | 0
ChEF: A Comprehensive Evaluation Framework for Standardized Assessment of Multimodal Large Language Models |  | 0
CHIME: Conditional Hallucination and Integrated Multi-scale Enhancement for Time Series Diffusion Model |  | 0
CIEM: Contrastive Instruction Evaluation Method for Better Instruction Tuning |  | 0
Classification-Based Automatic HDL Code Generation Using LLMs |  | 0
CLATTER: Comprehensive Entailment Reasoning for Hallucination Detection |  | 0
CleAR: Robust Context-Guided Generative Lighting Estimation for Mobile Augmented Reality |  | 0
CLIP-Cluster: CLIP-Guided Attribute Hallucination for Face Clustering |  | 0
CLIP-DPO: Vision-Language Models as a Source of Preference for Fixing Hallucinations in LVLMs |  | 0
CLUE: Concept-Level Uncertainty Estimation for Large Language Models |  | 0
Coarse-to-Fine Highlighting: Reducing Knowledge Hallucination in Large Language Models |  | 0
CODE: Contrasting Self-generated Description to Combat Hallucination in Large Multi-modal Models |  | 0
Code Hallucination |  | 0
CodeMirage: Hallucinations in Code Generated by Large Language Models |  | 0
Page 25 of 37