
Hallucination

Papers

Showing 801–850 of 1816 papers

Title | Status | Hype
EAGLE: Enhanced Visual Grounding Minimizes Hallucinations in Instructional Multimodal Models | | 0
Dynamic In-Context Learning from Nearest Neighbors for Bundle Generation | | 0
NELLIE: A Neuro-Symbolic Inference Engine for Grounded, Compositional, and Explainable Reasoning | | 0
Dyna-LfLH: Learning Agile Navigation in Dynamic Environments from Learned Hallucination | | 0
Dual-View Data Hallucination with Semantic Relation Guidance for Few-Shot Image Recognition | | 0
Bridging the Data Gap between Training and Inference for Unsupervised Neural Machine Translation | | 0
Bridging LMS and Generative AI: Dynamic Course Content Integration (DCCI) for Connecting LLMs to Course Content -- The Ask ME Assistant | | 0
An Investigation of Monotonic Transducers for Large-Scale Automatic Speech Recognition | | 0
DSVD: Dynamic Self-Verify Decoding for Faithful Generation in Large Language Models | | 0
DREAM: On hallucinations in AI-generated content for nuclear medicine imaging | | 0
Do Robot Snakes Dream like Electric Sheep? Investigating the Effects of Architectural Inductive Biases on Hallucination | | 0
BRAVE: Broadening the visual encoding of vision-language models | | 0
Don't Believe Everything You Read: Enhancing Summarization Interpretability through Automatic Identification of Hallucinations in Large Language Models | | 0
Do More Details Always Introduce More Hallucinations in LVLM-based Image Captioning? | | 0
ADeLA: Automatic Dense Labeling With Attention for Viewpoint Shift in Semantic Segmentation | | 0
Do LLMs Know about Hallucination? An Empirical Investigation of LLM's Hidden States | | 0
Blind Image Super-Resolution with Spatial Context Hallucination | | 0
Does the Generator Mind its Contexts? An Analysis of Generative Model Faithfulness under Context Transfer | | 0
Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? | | 0
Black-Box Visual Prompt Engineering for Mitigating Object Hallucination in Large Vision Language Models | | 0
An Evolutionary Large Language Model for Hallucination Mitigation | | 0
Do Androids Know They're Only Dreaming of Electric Sheep? | | 0
DNR Bench: Benchmarking Over-Reasoning in Reasoning LLMs | | 0
Black-Box Opinion Manipulation Attacks to Retrieval-Augmented Generation of Large Language Models | | 0
An evaluation of template and ML-based generation of user-readable text from a knowledge graph | | 0
3D human tongue reconstruction from single "in-the-wild" images | | 0
MAO: A Framework for Process Model Generation with Multi-Agent Orchestration | | 0
Piculet: Specialized Models-Guided Hallucination Decrease for MultiModal Large Language Models | | 0
Diverging Towards Hallucination: Detection of Failures in Vision-Language Models via Multi-token Aggregation | | 0
DiTSE: High-Fidelity Generative Speech Enhancement via Latent Diffusion Transformers | | 0
BIMA: Bijective Maximum Likelihood Learning Approach to Hallucination Prediction and Mitigation in Large Vision-Language Models | | 0
Distilling Desired Comments for Enhanced Code Review with Large Language Models | | 0
Distillation of encoder-decoder transformers for sequence labelling | | 0
BibSonomy Meets ChatLLMs for Publication Management: From Chat to Publication Management: Organizing your related work using BibSonomy & LLMs | | 0
DiffMAC: Diffusion Manifold Hallucination Correction for High Generalization Blind Face Restoration | | 0
Beyond Words: On Large Language Models Actionability in Mission-Critical Risk Analysis | | 0
An End-to-End Depth-Based Pipeline for Selfie Image Rectification | | 0
Beyond Under-Alignment: Atomic Preference Enhanced Factuality Tuning for Large Language Models | | 0
DiDOTS: Knowledge Distillation from Large-Language-Models for Dementia Obfuscation in Transcribed Speech | | 0
Beyond the Black Box: Interpretability of LLMs in Finance | | 0
An Automated Reinforcement Learning Reward Design Framework with Large Language Model for Cooperative Platoon Coordination | | 0
DHCP: Detecting Hallucinations by Cross-modal Attention Pattern in Large Vision-Language Models | | 0
Beyond Logit Lens: Contextual Embeddings for Robust Hallucination Detection & Grounding in VLMs | | 0
Anatomy of Industrial Scale Multilingual ASR | | 0
A Debate-Driven Experiment on LLM Hallucinations and Accuracy | | 0
Developing a Reliable, Fast, General-Purpose Hallucination Detection and Mitigation Service | | 0
LLMAuditor: A Framework for Auditing Large Language Models Using Human-in-the-Loop | | 0
Detection and Mitigation of Hallucination in Large Reasoning Models: A Mechanistic Perspective | | 0
An Analysis of Decoding Methods for LLM-based Agents for Faithful Multi-Hop Question Answering | | 0
Detecting LLM Hallucination Through Layer-wise Information Deficiency: Analysis of Unanswerable Questions and Ambiguous Prompts | | 0
Page 17 of 37

No leaderboard results yet.