SOTAVerified

Hallucination Papers

Showing 751–800 of 1816 papers

Title | Status | Hype
HICD: Hallucination-Inducing via Attention Dispersion for Contrastive Decoding to Mitigate Hallucinations in Large Language Models | Code | 0
Improving Factuality in Large Language Models via Decoding-Time Hallucinatory and Truthful Comparators | Code | 0
LightHouse: A Survey of AGI Hallucination | Code | 0
Enhancing Retrieval Processes for Language Generation with Augmented Queries | | 0
Enhancing RAG with Active Learning on Conversation Records: Reject Incapables and Answer Capables | | 0
Can We Catch the Elephant? A Survey of the Evolvement of Hallucination Evaluation on Natural Language Generation | | 0
From Training-Free to Adaptive: Empirical Insights into MLLMs' Understanding of Detection Information | | 0
Can Structured Data Reduce Epistemic Uncertainty? | | 0
Enhancing Multi-Agent Consensus through Third-Party LLM Integration: Analyzing Uncertainty and Mitigating Hallucinations in Large Language Models | | 0
Enhancing Mathematical Reasoning in Large Language Models with Self-Consistency-Based Hallucination Detection | | 0
Can Open-source LLMs Enhance Data Synthesis for Toxic Detection?: An Experimental Study | | 0
Enhancing LLM Generation with Knowledge Hypergraph for Evidence-Based Medicine | | 0
Can LLMs Detect Intrinsic Hallucinations in Paraphrasing and Machine Translation? | | 0
Applying RLAIF for Code Generation with API-usage in Lightweight LLMs | | 0
Enhancing Hallucination Detection through Noise Injection | | 0
Enhancing Guardrails for Safe and Secure Healthcare AI | | 0
Enhancing Emergency Decision-making with Knowledge Graphs and Large Language Models | | 0
Can LLM be a Good Path Planner based on Prompt Engineering? Mitigating the Hallucination for Path Planning | | 0
Applications of Large Language Model Reasoning in Feature Generation | | 0
Enhanced Hallucination Detection in Neural Machine Translation through Simple Detector Aggregation | | 0
Enhanced document retrieval with topic embeddings | | 0
Can Large Language Models Play Games? A Case Study of A Self-Play Approach | | 0
Endowing Embodied Agents with Spatial Reasoning Capabilities for Vision-and-Language Navigation | | 0
Can Knowledge Graphs Make Large Language Models More Trustworthy? An Empirical Study over Open-ended Question Answering | | 0
A Perspective for Adapting Generalist AI to Specialized Medical AI Applications and Their Challenges | | 0
AGA-GAN: Attribute Guided Attention Generative Adversarial Network with U-Net for Face Hallucination | | 0
Enabling Explainable Recommendation in E-commerce with LLM-powered Product Knowledge Graph | | 0
EMMA: Empowering Multi-modal Mamba with Structural and Hierarchical Alignment | | 0
Emergence and dynamics of delusions and hallucinations across stages in early psychosis | | 0
Can Foundational Large Language Models Assist with Conducting Pharmaceuticals Manufacturing Investigations? | | 0
Can a Transformer Pass the Wug Test? Tuning Copying Bias in Neural Morphological Inflection Models | | 0
Eliciting Language Model Behaviors with Investigator Agents | | 0
Can a Hallucinating Model help in Reducing Human "Hallucination"? | | 0
Anticipation-free Training for Simultaneous Translation | | 0
Calm-Whisper: Reduce Whisper Hallucination On Non-Speech By Calming Crazy Heads Down | | 0
EF-LLM: Energy Forecasting LLM with AI-assisted Automation, Enhanced Sparse Prediction, Hallucination Detection | | 0
CalliReader: Contextualizing Chinese Calligraphy via an Embedding-Aligned Vision-Language Model | | 0
Efficient Self-Improvement in Multimodal Large Language Models: A Model-Level Judge-Free Approach | | 0
Efficient Non-Parametric Uncertainty Quantification for Black-Box Large Language Models and Decision Planning | | 0
Efficient Contrastive Decoding with Probabilistic Hallucination Detection - Mitigating Hallucinations in Large Vision Language Models - | | 0
Calibrated Language Models Must Hallucinate | | 0
A Novel Approach to Eliminating Hallucinations in Large Language Model-Assisted Causal Discovery | | 0
Adversarial Discriminative Heterogeneous Face Recognition | | 0
Abstraction ou hallucination ? État des lieux et évaluation du risque pour les modèles de génération de résumés automatiques de type séquence-à-séquence (Abstraction or Hallucination ? Status and Risk assessment for sequence-to-sequence Automatic) | | 0
Efficient and robust 3D blind harmonization for large domain gaps | | 0
Effectiveness Assessment of Recent Large Vision-Language Models | | 0
ByDeWay: Boost Your multimodal LLM with DEpth prompting in a Training-Free Way | | 0
ECKGBench: Benchmarking Large Language Models in E-commerce Leveraging Knowledge Graph | | 0
BRIDO: Bringing Democratic Order to Abstractive Summarization | | 0
EAZY: Eliminating Hallucinations in LVLMs by Zeroing out Hallucinatory Image Tokens | | 0
Page 16 of 37

No leaderboard results yet.