SOTAVerified

Hallucination Papers

Showing 601–610 of 1816 papers

| Title | Status | Hype |
|---|---|---|
| Learning on LLM Output Signatures for gray-box LLM Behavior Analysis | Code | 0 |
| Learning Fine-grained Domain Generalization via Hyperbolic State Space Hallucination | Code | 0 |
| Leveraging Pretrained Models for Automatic Summarization of Doctor-Patient Conversations | Code | 0 |
| LLM Internal States Reveal Hallucination Risk Faced With a Query | Code | 0 |
| MAF: Multi-Aspect Feedback for Improving Reasoning in Large Language Models | Code | 0 |
| Large Language Models Are Involuntary Truth-Tellers: Exploiting Fallacy Failure for Jailbreak Attacks | Code | 0 |
| A Comparative Study on Language Models for Task-Oriented Dialogue Systems | Code | 0 |
| Language Models Hallucinate, but May Excel at Fact Verification | Code | 0 |
| AIstorian lets AI be a historian: A KG-powered multi-agent system for accurate biography generation | Code | 0 |
| Multi-Source Knowledge Pruning for Retrieval-Augmented Generation: A Benchmark and Empirical Study | Code | 0 |
