SOTAVerified
Hallucination Papers

Showing 426–450 of 1816 papers

| Title | Status | Hype |
| --- | --- | --- |
| A Survey on Responsible LLMs: Inherent Risk, Malicious Use, and Mitigation Strategy | | 0 |
| ChartInsighter: An Approach for Mitigating Hallucination in Time-series Chart Summary Generation with A Benchmark Dataset | Code | 1 |
| Knowledge Graph-based Retrieval-Augmented Generation for Schema Matching | Code | 1 |
| Multimodal LLMs Can Reason about Aesthetics in Zero-Shot | Code | 1 |
| HALoGEN: Fantastic LLM Hallucinations and Where to Find Them | | 0 |
| Tarsier2: Advancing Large Vision-Language Models from Detailed Video Description to Comprehensive Video Understanding | Code | 4 |
| GPT as a Monte Carlo Language Tree: A Probabilistic Perspective | | 0 |
| Fine-tuning Large Language Models for Improving Factuality in Legal Question Answering | Code | 0 |
| VASparse: Towards Efficient Visual Hallucination Mitigation for Large Vision-Language Model via Visual-Aware Sparsification | Code | 1 |
| MedCT: A Clinical Terminology Graph for Generative AI Applications in Healthcare | | 0 |
| Hermit Kingdom Through the Lens of Multiple Perspectives: A Case Study of LLM Hallucination on North Korea | | 0 |
| ECBench: Can Multi-modal Foundation Models Understand the Egocentric World? A Holistic Embodied Cognition Benchmark | Code | 1 |
| Seeing with Partial Certainty: Conformal Prediction for Robotic Scene Recognition in Built Environments | | 0 |
| Feedback-Driven Vision-Language Alignment with Minimal Human Supervision | | 0 |
| RAG-Check: Evaluating Multimodal Retrieval Augmented Generation Performance | | 0 |
| FlippedRAG: Black-Box Opinion Manipulation Adversarial Attacks to Retrieval-Augmented Generation Models | | 0 |
| EAGLE: Enhanced Visual Grounding Minimizes Hallucinations in Instructional Multimodal Models | | 0 |
| Socratic Questioning: Learn to Self-guide Multimodal Reasoning in the Wild | Code | 0 |
| Foundations of GenIR | | 0 |
| CHAIR -- Classifier of Hallucination as Improver | Code | 0 |
| A Survey of State of the Art Large Vision Language Models: Alignment, Benchmark, Evaluations and Challenges | Code | 4 |
| CarbonChat: Large Language Model-Based Corporate Carbon Emission Analysis and Climate Knowledge Q&A System | | 0 |
| Mitigating Hallucination for Large Vision Language Model by Inter-Modality Correlation Calibration Decoding | Code | 1 |
| LLMs & Legal Aid: Understanding Legal Needs Exhibited Through User Queries | | 0 |
| Enhancing Uncertainty Modeling with Semantic Graph for Hallucination Detection | | 0 |