SOTAVerified

Hallucination Papers

Showing 1221–1230 of 1816 papers

Title | Status | Hype
Large Language Model with Graph Convolution for Recommendation | — | 0
LLMAuditor: A Framework for Auditing Large Language Models Using Human-in-the-Loop | — | 0
InstructGraph: Boosting Large Language Models via Graph-centric Instruction Tuning and Preference Alignment | Code | 2
Visually Dehallucinative Instruction Generation | Code | 0
Mitigating Object Hallucination in Large Vision-Language Models via Classifier-Free Guidance | — | 0
A Systematic Review of Data-to-Text NLG | — | 0
Careless Whisper: Speech-to-Text Hallucination Harms | Code | 0
Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models | Code | 4
PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models | Code | 3
G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering | Code | 4
Page 123 of 182

No leaderboard results yet.