SOTAVerified: Hallucination Papers

Showing 826–850 of 1816 papers

| Title | Status | Hype |
| --- | --- | --- |
| 3D human tongue reconstruction from single "in-the-wild" images | | 0 |
| MAO: A Framework for Process Model Generation with Multi-Agent Orchestration | | 0 |
| Piculet: Specialized Models-Guided Hallucination Decrease for MultiModal Large Language Models | | 0 |
| Diverging Towards Hallucination: Detection of Failures in Vision-Language Models via Multi-token Aggregation | | 0 |
| DiTSE: High-Fidelity Generative Speech Enhancement via Latent Diffusion Transformers | | 0 |
| BIMA: Bijective Maximum Likelihood Learning Approach to Hallucination Prediction and Mitigation in Large Vision-Language Models | | 0 |
| Distilling Desired Comments for Enhanced Code Review with Large Language Models | | 0 |
| Distillation of encoder-decoder transformers for sequence labelling | | 0 |
| From Chat to Publication Management: Organizing your related work using BibSonomy & LLMs | | 0 |
| DiffMAC: Diffusion Manifold Hallucination Correction for High Generalization Blind Face Restoration | | 0 |
| Beyond Words: On Large Language Models Actionability in Mission-Critical Risk Analysis | | 0 |
| An End-to-End Depth-Based Pipeline for Selfie Image Rectification | | 0 |
| Beyond Under-Alignment: Atomic Preference Enhanced Factuality Tuning for Large Language Models | | 0 |
| DiDOTS: Knowledge Distillation from Large-Language-Models for Dementia Obfuscation in Transcribed Speech | | 0 |
| Beyond the Black Box: Interpretability of LLMs in Finance | | 0 |
| An Automated Reinforcement Learning Reward Design Framework with Large Language Model for Cooperative Platoon Coordination | | 0 |
| DHCP: Detecting Hallucinations by Cross-modal Attention Pattern in Large Vision-Language Models | | 0 |
| Beyond Logit Lens: Contextual Embeddings for Robust Hallucination Detection & Grounding in VLMs | | 0 |
| Anatomy of Industrial Scale Multilingual ASR | | 0 |
| A Debate-Driven Experiment on LLM Hallucinations and Accuracy | | 0 |
| Developing a Reliable, Fast, General-Purpose Hallucination Detection and Mitigation Service | | 0 |
| LLMAuditor: A Framework for Auditing Large Language Models Using Human-in-the-Loop | | 0 |
| Detection and Mitigation of Hallucination in Large Reasoning Models: A Mechanistic Perspective | | 0 |
| An Analysis of Decoding Methods for LLM-based Agents for Faithful Multi-Hop Question Answering | | 0 |
| Detecting LLM Hallucination Through Layer-wise Information Deficiency: Analysis of Unanswerable Questions and Ambiguous Prompts | | 0 |
Page 34 of 73
