SOTAVerified

Hallucination

Papers

Showing 1411–1420 of 1816 papers

| Title | Status | Hype |
|---|---|---|
| Large Language Model with Graph Convolution for Recommendation | | 0 |
| LLMAuditor: A Framework for Auditing Large Language Models Using Human-in-the-Loop | | 0 |
| Visually Dehallucinative Instruction Generation | Code | 0 |
| A Systematic Review of Data-to-Text NLG | | 0 |
| Mitigating Object Hallucination in Large Vision-Language Models via Classifier-Free Guidance | | 0 |
| Careless Whisper: Speech-to-Text Hallucination Harms | Code | 0 |
| GLaM: Fine-Tuning Large Language Models for Domain Knowledge Graph Alignment via Neighborhood Partitioning and Generative Subgraph Encoding | | 0 |
| ViGoR: Improving Visual Grounding of Large Vision Language Models with Fine-Grained Reward Modeling | Code | 0 |
| An Examination on the Effectiveness of Divide-and-Conquer Prompting in Large Language Models | | 0 |
| The Instinctive Bias: Spurious Images lead to Illusion in MLLMs | Code | 0 |
