SOTAVerified: Hallucination Papers

Showing 181–190 of 1816 papers

Title | Status | Hype
Black-Box Visual Prompt Engineering for Mitigating Object Hallucination in Large Vision Language Models | | 0
Localizing Before Answering: A Hallucination Evaluation Benchmark for Grounded Medical Multimodal LLMs | | 0
Antidote: A Unified Framework for Mitigating LVLM Hallucinations in Counterfactual Presupposition and Object Perception | Code | 1
Hallucination by Code Generation LLMs: Taxonomy, Benchmarks, Mitigation, and Challenges | | 0
Can LLMs Detect Intrinsic Hallucinations in Paraphrasing and Machine Translation? | | 0
An Automated Reinforcement Learning Reward Design Framework with Large Language Model for Cooperative Platoon Coordination | | 0
Explanatory Summarization with Discourse-Driven Planning | | 0
Uncertainty Quantification for Language Models: A Suite of Black-Box, White-Box, LLM Judge, and Ensemble Scorers | Code | 5
Validating Network Protocol Parsers with Traceable RFC Document Interpretation | | 0
Data-Driven Calibration of Prediction Sets in Large Vision-Language Models Based on Inductive Conformal Prediction | | 0
Page 19 of 182
