SOTAVerified

Hallucination Papers

Showing 1091–1100 of 1816 papers

Title | Status | Hype
Grounded in Context: Retrieval-Based Method for Hallucination Detection | | 0
GROUNDHOG: Grounding Large Language Models to Holistic Segmentation | | 0
Grounding Language with Vision: A Conditional Mutual Information Calibrated Decoding Strategy for Reducing Hallucinations in LVLMs | | 0
GTP-4o: Modality-prompted Heterogeneous Graph Learning for Omni-modal Biomedical Representation | | 0
GUARDIAN: Safeguarding LLM Multi-Agent Collaborations with Temporal Graph Modeling | | 0
Guiding Clinical Reasoning with Large Language Models via Knowledge Seeds | | 0
An Examination on the Effectiveness of Divide-and-Conquer Prompting in Large Language Models | | 0
Unveiling the Black Box of PLMs with Semantic Anchors: Towards Interpretable Neural Semantic Parsing | | 0
GUMsley: Evaluating Entity Salience in Summarization for 12 English Genres | | 0
Hal-Eval: A Universal and Fine-grained Hallucination Evaluation Framework for Large Vision Language Models | | 0
Page 110 of 182
