SOTAVerified: Hallucination Papers

Showing 676–700 of 1816 papers

Title | Status | Hype
IterGen: Iterative Semantic-aware Structured LLM Generation with Backtracking | Code | 1
From Pixels to Tokens: Revisiting Object Hallucinations in Large Vision-Language Models | | 0
Utilize the Flow before Stepping into the Same River Twice: Certainty Represented Knowledge Flow for Refusal-Aware Instruction Tuning | Code | 0
Embodied Agent Interface: Benchmarking LLMs for Embodied Decision Making | Code | 3
EMMA: Empowering Multi-modal Mamba with Structural and Hierarchical Alignment | | 0
ReFIR: Grounding Large Restoration Models with Retrieval Augmentation | Code | 2
Listening to Patients: A Framework of Detecting and Mitigating Patient Misreport for Medical Dialogue Generation | | 0
Gradual Learning: Optimizing Fine-Tuning with Partially Mastered Knowledge in Large Language Models | | 0
FG-PRM: Fine-grained Hallucination Detection and Mitigation in Language Model Mathematical Reasoning | | 0
Differential Transformer | Code | 2
TLDR: Token-Level Detective Reward Model for Large Vision Language Models | | 0
AI-Enhanced Ethical Hacking: A Linux-Focused Experiment | | 0
Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality | Code | 2
Mitigating Hallucinations Using Ensemble of Knowledge Graph and Vector Store in Large Language Models to Enhance Mental Health Support | | 0
DAMRO: Dive into the Attention Mechanism of LVLM to Reduce Object Hallucination | | 0
DiDOTS: Knowledge Distillation from Large-Language-Models for Dementia Obfuscation in Transcribed Speech | | 0
TUBench: Benchmarking Large Vision-Language Models on Trustworthiness with Unanswerable Questions | Code | 0
SAG: Style-Aligned Article Generation via Model Collaboration | | 0
Auto-GDA: Automatic Domain Adaptation for Efficient Grounding Verification in Retrieval Augmented Generation | | 0
Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal Large Language Models | Code | 2
Investigating and Mitigating Object Hallucinations in Pretrained Vision-Language (CLIP) Models | Code | 0
FactCheckmate: Preemptively Detecting and Mitigating Hallucinations in LMs | | 0
Characterizing Context Influence and Hallucination in Summarization | Code | 0
CriSPO: Multi-Aspect Critique-Suggestion-guided Automatic Prompt Optimization for Text Generation | Code | 1
Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization | Code | 0
Page 28 of 73