SOTAVerified

Hallucination

Papers

Showing 1701–1710 of 1816 papers

Title | Status | Hype
HypoTermQA: Hypothetical Terms Dataset for Benchmarking Hallucination Tendency of LLMs | Code | 0
A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models | Code | 0
Automating Feedback Analysis in Surgical Training: Detection, Categorization, and Assessment | Code | 0
Pre-trained Language Models Return Distinguishable Probability Distributions to Unfaithfully Hallucinated Texts | Code | 0
Confidence-aware Denoised Fine-tuning of Off-the-shelf Models for Certified Robustness | Code | 0
How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities | Code | 0
Prioritizing Image-Related Tokens Enhances Vision-Language Pre-Training | Code | 0
How Much Do LLMs Hallucinate across Languages? On Multilingual Estimation of LLM Hallucination in the Wild | Code | 0
Step-by-step Instructions and a Simple Tabular Output Format Improve the Dependency Parsing Accuracy of LLMs | Code | 0
How Helpful is Inverse Reinforcement Learning for Table-to-Text Generation? | Code | 0
Page 171 of 182

No leaderboard results yet.