SOTAVerified

Hallucination Papers

Showing 1671–1680 of 1816 papers

Title | Status | Hype
Logical Consistency of Large Language Models in Fact-checking | | 0
Look Before You Leap: An Exploratory Study of Uncertainty Measurement for Large Language Models | | 0
Look Before You Leap: Towards Decision-Aware and Generalizable Tool-Usage for Large Language Models | | 0
Look Within, Why LLMs Hallucinate: A Causal Perspective | | 0
Lost in Transcription, Found in Distribution Shift: Demystifying Hallucination in Speech Foundation Models | | 0
Lower Layer Matters: Alleviating Hallucination via Multi-Layer Fusion Contrastive Decoding with Truthfulness Refocused | | 0
Low-hallucination Synthetic Captions for Large-Scale Vision-Language Model Pre-training | | 0
LR-to-HR Face Hallucination with an Adversarial Progressive Attribute-Induced Network | | 0
Luna: An Evaluation Foundation Model to Catch Language Model Hallucinations with High Accuracy and Low Cost | | 0
Lynx: An Open Source Hallucination Evaluation Model | | 0
Page 168 of 182
