SOTAVerified

Hallucination

Papers

Showing 1171–1180 of 1816 papers

Title | Status | Hype
Collaborative decoding of critical tokens for boosting factuality of large language models | — | 0
All in an Aggregated Image for In-Image Learning | Code | 1
Editing Factual Knowledge and Explanatory Ability of Medical Large Language Models | Code | 0
Securing Reliability: A Brief Overview on Enhancing In-Context Learning for Foundation Models | — | 0
TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space | Code | 2
Re-Ex: Revising after Explanation Reduces the Factual Errors in LLM Responses | Code | 0
Look Before You Leap: Towards Decision-Aware and Generalizable Tool-Usage for Large Language Models | — | 0
GROUNDHOG: Grounding Large Language Models to Holistic Segmentation | — | 0
HypoTermQA: Hypothetical Terms Dataset for Benchmarking Hallucination Tendency of LLMs | Code | 0
Rethinking Software Engineering in the Foundation Model Era: A Curated Catalogue of Challenges in the Development of Trustworthy FMware | — | 0
