SOTAVerified

Hallucination

Papers

Showing 1141–1150 of 1816 papers

Title | Status | Hype
Are Reasoning Models More Prone to Hallucination? | | 0
A review of faithfulness metrics for hallucination assessment in Large Language Models | | 0
ARGUS: Hallucination and Omission Evaluation in Video-LLMs | | 0
ArxEval: Evaluating Retrieval and Generation in Language Models for Scientific Literature | | 0
ASCD: Attention-Steerable Contrastive Decoding for Reducing Hallucination in MLLM | | 0
A Schema-Guided Reason-while-Retrieve framework for Reasoning on Scene Graphs with Large-Language-Models (LLMs) | | 0
A Simple Recipe towards Reducing Hallucination in Neural Surface Realisation | | 0
Ask-EDA: A Design Assistant Empowered by LLM, Hybrid RAG and Abbreviation De-hallucination | | 0
Aspect-Based Summarization with Self-Aspect Retrieval Enhanced Generation | | 0
Assessing the use of Diffusion models for motion artifact correction in brain MRI | | 0
Page 115 of 182

No leaderboard results yet.