SOTAVerified

Hallucination Papers

Showing 751–800 of 1816 papers

Title | Status | Hype
Verify when Uncertain: Beyond Self-Consistency in Black Box Hallucination Detection | | 0
OpenSearch-SQL: Enhancing Text-to-SQL with Dynamic Few-shot and Consistency Alignment | | 0
Detecting LLM Fact-conflicting Hallucinations Enhanced by Temporal-logic-based Reasoning | | 0
TreeCut: A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation | Code | 0
SegSub: Evaluating Robustness to Knowledge Conflicts and Hallucinations in Vision-Language Models | Code | 0
REFIND: Retrieval-Augmented Factuality Hallucination Detection in Large Language Models | | 0
What are Models Thinking about? Understanding Large Language Model Hallucinations "Psychology" through Model Inner State Analysis | | 0
How Much Do LLMs Hallucinate across Languages? On Multilingual Estimation of LLM Hallucination in the Wild | Code | 0
CutPaste&Find: Efficient Multimodal Hallucination Detector with Visual-aid Knowledge Base | | 0
Lost in Transcription, Found in Distribution Shift: Demystifying Hallucination in Speech Foundation Models | | 0
Can Your Uncertainty Scores Detect Hallucinated Entity? | | 0
Valuable Hallucinations: Realizable Non-realistic Propositions | | 0
Smoothing Out Hallucinations: Mitigating LLM Hallucination with Smoothed Knowledge Distillation | | 0
A Survey of LLM-based Agents in Medicine: How far are we from Baymax? | | 0
Enhancing RAG with Active Learning on Conversation Records: Reject Incapables and Answer Capables | | 0
Elevating Legal LLM Responses: Harnessing Trainable Logical Structures and Semantic Knowledge with Legal Reasoning | Code | 0
DeepSeek on a Trip: Inducing Targeted Visual Hallucinations via Representation Vulnerabilities | | 0
Hallucination, Monofacts, and Miscalibration: An Empirical Investigation | Code | 0
Refine Knowledge of Large Language Models via Adaptive Contrastive Learning | | 0
Hallucination Detection: A Probabilistic Framework Using Embeddings Distance Analysis | | 0
Learning Conformal Abstention Policies for Adaptive Risk Management in Large Language and Vision-Language Models | Code | 0
Self-Rationalization in the Wild: A Large Scale Out-of-Distribution Evaluation on NLI-related tasks | Code | 0
ChallengeMe: An Adversarial Learning-enabled Text Summarization Framework | | 0
Enhancing Hallucination Detection through Noise Injection | | 0
Linear Correlation in LM's Compositional Generalization and Hallucination | Code | 0
TruthFlow: Truthful LLM Generation via Representation Flow Correction | | 0
A Schema-Guided Reason-while-Retrieve framework for Reasoning on Scene Graphs with Large-Language-Models (LLMs) | | 0
Mitigating Object Hallucinations in Large Vision-Language Models via Attention Calibration | | 0
Eliciting Language Model Behaviors with Investigator Agents | | 0
MJ-VIDEO: Fine-Grained Benchmarking and Rewarding Video Preferences in Video Generation | | 0
SelfCheckAgent: Zero-Resource Hallucination Detection in Generative Large Language Models | | 0
Assessing the use of Diffusion models for motion artifact correction in brain MRI | | 0
MINT: Mitigating Hallucinations in Large Vision-Language Models via Token Reduction | | 0
Poison as Cure: Visual Noise for Mitigating Object Hallucinations in LVMs | | 0
Importing Phantoms: Measuring LLM Package Hallucination Vulnerabilities | | 0
Differentially Private Steering for Large Language Model Alignment | Code | 0
Few-Shot Optimized Framework for Hallucination Detection in Resource-Limited NLP Systems | | 0
Mitigating Hallucinated Translations in Large Language Models with Hallucination-focused Preference Optimization | | 0
Open-Source Retrieval Augmented Generation Framework for Retrieving Accurate Medication Insights from Formularies for African Healthcare Workers | | 0
Scaling Large Vision-Language Models for Enhanced Multimodal Comprehension In Biomedical Image Analysis | | 0
Mirage in the Eyes: Hallucination Attack on Multi-modal Large Language Models with Only Attention Sink | | 0
Evaluating Hallucination in Large Vision-Language Models based on Context-Aware Object Similarities | | 0
Measuring and Mitigating Hallucinations in Vision-Language Dataset Generation for Remote Sensing | | 0
Hallucinations Can Improve Large Language Models in Drug Discovery | | 0
Comprehensive Modeling and Question Answering of Cancer Clinical Practice Guidelines using LLMs | | 0
OnionEval: An Unified Evaluation of Fact-conflicting Hallucination for Small-Large Language Models | Code | 0
RAG-Reward: Optimizing RAG with Reward Modeling and RLHF | | 0
Question-to-Question Retrieval for Hallucination-Free Knowledge Access: An Approach for Wikipedia and Wikidata Question Answering | | 0
Hallucination Mitigation using Agentic AI Natural Language-Based Frameworks | Code | 0
Attention-guided Self-reflection for Zero-shot Hallucination Detection in Large Language Models | | 0
Page 16 of 37
