SOTAVerified

Hallucination

Papers

Showing 776–800 of 1816 papers

| Title | Status | Hype |
| --- | --- | --- |
| TruthFlow: Truthful LLM Generation via Representation Flow Correction | | 0 |
| A Schema-Guided Reason-while-Retrieve framework for Reasoning on Scene Graphs with Large-Language-Models (LLMs) | | 0 |
| Mitigating Object Hallucinations in Large Vision-Language Models via Attention Calibration | | 0 |
| Eliciting Language Model Behaviors with Investigator Agents | | 0 |
| MJ-VIDEO: Fine-Grained Benchmarking and Rewarding Video Preferences in Video Generation | | 0 |
| SelfCheckAgent: Zero-Resource Hallucination Detection in Generative Large Language Models | | 0 |
| Assessing the use of Diffusion models for motion artifact correction in brain MRI | | 0 |
| MINT: Mitigating Hallucinations in Large Vision-Language Models via Token Reduction | | 0 |
| Poison as Cure: Visual Noise for Mitigating Object Hallucinations in LVMs | | 0 |
| Importing Phantoms: Measuring LLM Package Hallucination Vulnerabilities | | 0 |
| Differentially Private Steering for Large Language Model Alignment | Code | 0 |
| Few-Shot Optimized Framework for Hallucination Detection in Resource-Limited NLP Systems | | 0 |
| Mitigating Hallucinated Translations in Large Language Models with Hallucination-focused Preference Optimization | | 0 |
| Open-Source Retrieval Augmented Generation Framework for Retrieving Accurate Medication Insights from Formularies for African Healthcare Workers | | 0 |
| Scaling Large Vision-Language Models for Enhanced Multimodal Comprehension In Biomedical Image Analysis | | 0 |
| Mirage in the Eyes: Hallucination Attack on Multi-modal Large Language Models with Only Attention Sink | | 0 |
| Evaluating Hallucination in Large Vision-Language Models based on Context-Aware Object Similarities | | 0 |
| Measuring and Mitigating Hallucinations in Vision-Language Dataset Generation for Remote Sensing | | 0 |
| Hallucinations Can Improve Large Language Models in Drug Discovery | | 0 |
| Comprehensive Modeling and Question Answering of Cancer Clinical Practice Guidelines using LLMs | | 0 |
| OnionEval: An Unified Evaluation of Fact-conflicting Hallucination for Small-Large Language Models | Code | 0 |
| RAG-Reward: Optimizing RAG with Reward Modeling and RLHF | | 0 |
| Question-to-Question Retrieval for Hallucination-Free Knowledge Access: An Approach for Wikipedia and Wikidata Question Answering | | 0 |
| Hallucination Mitigation using Agentic AI Natural Language-Based Frameworks | Code | 0 |
| Attention-guided Self-reflection for Zero-shot Hallucination Detection in Large Language Models | | 0 |
Page 32 of 73