SOTAVerified
Hallucination Papers

Showing 451–500 of 1816 papers

Title | Status | Hype
DeepRetro: Retrosynthetic Pathway Discovery using Iterative LLM Reasoning |  | 0
ReLoop: "Seeing Twice and Thinking Backwards" via Closed-loop Training to Mitigate Hallucinations in Multimodal Understanding |  | 0
The Future is Agentic: Definitions, Perspectives, and Open Challenges of Multi-Agent Recommender Systems |  | 0
GAF-Guard: An Agentic Framework for Risk Management and Governance in Large Language Models | Code | 0
HalluSegBench: Counterfactual Visual Reasoning for Segmentation Hallucination Evaluation |  | 0
Mitigating Hallucination of Large Vision-Language Models via Dynamic Logits Calibration | Code | 0
Seeing is Believing? Mitigating OCR Hallucinations in Multimodal Large Language Models |  | 0
Feature Hallucination for Self-supervised Action Recognition |  | 0
HEAL: An Empirical Study on Hallucinations in Embodied Agents Driven by Large Language Models |  | 0
Robust Instant Policy: Leveraging Student's t-Regression Model for Robust In-context Imitation Learning of Robot Manipulation |  | 0
ASCD: Attention-Steerable Contrastive Decoding for Reducing Hallucination in MLLM |  | 0
Abstract Meaning Representation for Hospital Discharge Summarization | Code | 0
DREAM: On hallucinations in AI-generated content for nuclear medicine imaging |  | 0
VL-GenRM: Enhancing Vision-Language Verification via Vision Experts and Iterative Training |  | 0
Stress-Testing Multimodal Foundation Models for Crystallographic Reasoning | Code | 0
HKD4VLM: A Progressive Hybrid Knowledge Distillation Framework for Robust Multimodal Hallucination and Factuality Detection in VLMs |  | 0
A Regret Perspective on Online Selective Generation |  | 0
Second Order State Hallucinations for Adversarial Attack Mitigation in Formation Control of Multi-Agent Systems |  | 0
HalLoc: Token-level Localization of Hallucinations for Vision Language Models | Code | 0
Generalization or Hallucination? Understanding Out-of-Context Reasoning in Transformers |  | 0
Text-Aware Image Restoration with Diffusion Models |  | 0
Attention Head Embeddings with Trainable Deep Kernels for Hallucination Detection in LLMs |  | 0
Step-by-step Instructions and a Simple Tabular Output Format Improve the Dependency Parsing Accuracy of LLMs | Code | 0
RHealthTwin: Towards Responsible and Multimodal Digital Twins for Personalized Well-being |  | 0
SECOND: Mitigating Perceptual Hallucination in Vision-Language Models via Selective and Contrastive Decoding | Code | 0
MEMOIR: Lifelong Model Editing with Minimal Overwrite and Informed Retention for LLMs |  | 0
Conservative Bias in Large Language Models: Measuring Relation Predictions |  | 0
Uncertainty-o: One Model-agnostic Framework for Unveiling Uncertainty in Large Multimodal Models |  | 0
ARGUS: Hallucination and Omission Evaluation in Video-LLMs |  | 0
Reducing Object Hallucination in Large Audio-Language Models via Audio-Aware Decoding |  | 0
Hallucination at a Glance: Controlled Visual Edits and Fine-Grained Multimodal Learning |  | 0
QuantMCP: Grounding Large Language Models in Verifiable Financial Reality |  | 0
When Thinking LLMs Lie: Unveiling the Strategic Deception in Representations of Reasoning Models |  | 0
CLATTER: Comprehensive Entailment Reasoning for Hallucination Detection |  | 0
GOLFer: Smaller LM-Generated Documents Hallucination Filter & Combiner for Query Expansion in Information Retrieval | Code | 0
Magic Mushroom: A Customizable Benchmark for Fine-grained Analysis of Retrieval Noise Erosion in RAG Systems |  | 0
On the Fundamental Impossibility of Hallucination Control in Large Language Models |  | 0
CHIME: Conditional Hallucination and Integrated Multi-scale Enhancement for Time Series Diffusion Model |  | 0
Machine Mirages: Defining the Undefined |  | 0
Mitigating Manipulation and Enhancing Persuasion: A Reflective Multi-Agent Approach for Legal Argument Generation |  | 0
Tomographic Foundation Model -- FORCE: Flow-Oriented Reconstruction Conditioning Engine |  | 0
TRUST -- Transformer-Driven U-Net for Sparse Target Recovery |  | 0
Measuring Faithfulness and Abstention: An Automated Pipeline for Evaluating LLM-Generated 3-ply Case-Based Legal Arguments |  | 0
Generative AI and Organizational Structure in the Knowledge Economy |  | 0
Improving Reliability and Explainability of Medical Question Answering through Atomic Fact Checking in Retrieval-Augmented LLMs |  | 0
BIMA: Bijective Maximum Likelihood Learning Approach to Hallucination Prediction and Mitigation in Large Vision-Language Models |  | 0
MIRAGE: Assessing Hallucination in Multimodal Reasoning Chains of MLLM |  | 0
An AI-powered Knowledge Hub for Potato Functional Genomics |  | 0
LLM Inference Enhanced by External Knowledge: A Survey | Code | 0
Reinforcement Learning for Better Verbalized Confidence in Long-Form Generation |  | 0
Page 10 of 37
