SOTAVerified

Hallucination Papers

Showing 576–600 of 1816 papers

Title | Status | Hype
Correction with Backtracking Reduces Hallucination in Summarization | Code | 0
MedHallTune: An Instruction-Tuning Benchmark for Mitigating Medical Hallucination in Vision-Language Models | Code | 0
Mitigating Hallucination of Large Vision-Language Models via Dynamic Logits Calibration | Code | 0
Conversational Gold: Evaluating Personalized Conversational Search System using Gold Nuggets | Code | 0
LLMs and Memorization: On Quality and Specificity of Copyright Compliance | Code | 0
Controlling Risk of Retrieval-augmented Generation: A Counterfactual Prompting Framework | Code | 0
Alleviating Hallucinations in Large Vision-Language Models through Hallucination-Induced Optimization | Code | 0
LLM Inference Enhanced by External Knowledge: A Survey | Code | 0
LLM Hallucinations in Practical Code Generation: Phenomena, Mechanism, and Mitigation | Code | 0
LLM Internal States Reveal Hallucination Risk Faced With a Query | Code | 0
LLM-based Query Expansion Fails for Unfamiliar and Ambiguous Queries | Code | 0
A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models | Code | 0
Linear Correlation in LM's Compositional Generalization and Hallucination | Code | 0
Logic Query of Thoughts: Guiding Large Language Models to Answer Complex Logic Queries with Knowledge Graphs | Code | 0
Leveraging Pretrained Models for Automatic Summarization of Doctor-Patient Conversations | Code | 0
Learning with privileged information via adversarial discriminative modality distillation | Code | 0
Confidence Estimation for LLM-Based Dialogue State Tracking | Code | 0
Confidence-aware Denoised Fine-tuning of Off-the-shelf Models for Certified Robustness | Code | 0
Exploring the Trade-Offs: Quantization Methods, Task Difficulty, and Model Size in Large Language Models From Edge to Giant | Code | 0
Learning on LLM Output Signatures for gray-box LLM Behavior Analysis | Code | 0
Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs | Code | 0
Large Language Models on Wikipedia-Style Survey Generation: an Evaluation in NLP Concepts | Code | 0
Learning Conformal Abstention Policies for Adaptive Risk Management in Large Language and Vision-Language Models | Code | 0
Large Language Models Are Involuntary Truth-Tellers: Exploiting Fallacy Failure for Jailbreak Attacks | Code | 0
Language Models Hallucinate, but May Excel at Fact Verification | Code | 0
Page 24 of 73
