SOTAVerified

Hallucination Papers

Showing 751-760 of 1816 papers

Title | Status | Hype
Safety challenges of AI in medicine in the era of large language models | - | 0
MEDIC: Towards a Comprehensive Framework for Evaluating LLMs in Clinical Applications | - | 0
Mitigating Hallucination in Visual-Language Models via Re-Balancing Contrastive Decoding | - | 0
LLMs Will Always Hallucinate, and We Need to Live With This | - | 0
Detecting Buggy Contracts via Smart Testing | - | 0
Generating Faithful and Salient Text from Multimodal Data | Code | 0
Combining LLMs and Knowledge Graphs to Reduce Hallucinations in Question Answering | - | 0
Vietnamese Legal Information Retrieval in Question-Answering System | - | 0
Hallucination Detection in LLMs: Fast and Memory-Efficient Fine-Tuned Models | Code | 0
CLUE: Concept-Level Uncertainty Estimation for Large Language Models | - | 0

No leaderboard results yet.