SOTAVerified

Hallucination Papers

Showing 1001–1025 of 1816 papers

Title | Status | Hype
LongHalQA: Long-Context Hallucination Evaluation for MultiModal Large Language Models | Code | 0
VLFeedback: A Large-Scale AI Feedback Dataset for Large Vision-Language Models Alignment | | 0
Measuring the Inconsistency of Large Language Models in Preferential Ranking | | 0
A Methodology for Evaluating RAG Systems: A Case Study On Configuration Dependency Validation | Code | 0
LatteCLIP: Unsupervised CLIP Fine-Tuning via LMM-Synthetic Texts | | 0
PublicHearingBR: A Brazilian Portuguese Dataset of Public Hearing Transcripts for Summarization of Long Documents | | 0
Can Knowledge Graphs Make Large Language Models More Trustworthy? An Empirical Study over Open-ended Question Answering | | 0
Utilize the Flow before Stepping into the Same River Twice: Certainty Represented Knowledge Flow for Refusal-Aware Instruction Tuning | Code | 0
From Pixels to Tokens: Revisiting Object Hallucinations in Large Vision-Language Models | | 0
FG-PRM: Fine-grained Hallucination Detection and Mitigation in Language Model Mathematical Reasoning | | 0
Gradual Learning: Optimizing Fine-Tuning with Partially Mastered Knowledge in Large Language Models | | 0
Listening to Patients: A Framework of Detecting and Mitigating Patient Misreport for Medical Dialogue Generation | | 0
EMMA: Empowering Multi-modal Mamba with Structural and Hierarchical Alignment | | 0
AI-Enhanced Ethical Hacking: A Linux-Focused Experiment | | 0
TLDR: Token-Level Detective Reward Model for Large Vision Language Models | | 0
DAMRO: Dive into the Attention Mechanism of LVLM to Reduce Object Hallucination | | 0
Mitigating Hallucinations Using Ensemble of Knowledge Graph and Vector Store in Large Language Models to Enhance Mental Health Support | | 0
DiDOTS: Knowledge Distillation from Large-Language-Models for Dementia Obfuscation in Transcribed Speech | | 0
TUBench: Benchmarking Large Vision-Language Models on Trustworthiness with Unanswerable Questions | Code | 0
Auto-GDA: Automatic Domain Adaptation for Efficient Grounding Verification in Retrieval Augmented Generation | | 0
SAG: Style-Aligned Article Generation via Model Collaboration | | 0
Investigating and Mitigating Object Hallucinations in Pretrained Vision-Language (CLIP) Models | Code | 0
FactCheckmate: Preemptively Detecting and Mitigating Hallucinations in LMs | | 0
Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization | Code | 0
Characterizing Context Influence and Hallucination in Summarization | Code | 0
Page 41 of 73
