SOTAVerified: Hallucination Papers

Showing 1501–1525 of 1816 papers

Title | Status | Hype
Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models | | 0
Enhancing Emergency Decision-making with Knowledge Graphs and Large Language Models | | 0
Ever: Mitigating Hallucination in Large Language Models through Real-Time Verification and Rectification | Code | 0
Predicting Text Preference Via Structured Comparative Reasoning | | 0
Insights into Classifying and Mitigating LLMs' Hallucinations | | 0
GPT-4V(ision) as A Social Media Analysis Engine | | 0
Investigating Multi-Pivot Ensembling with Massively Multilingual Machine Translation Models | Code | 0
Hallucination Augmented Recitations for Language Models | | 0
Hallucination-minimized Data-to-answer Framework for Financial Decision-makers | | 0
CBSiMT: Mitigating Hallucination in Simultaneous Machine Translation with Weighted Prefix-to-Prefix Training | | 0
ChEF: A Comprehensive Evaluation Framework for Standardized Assessment of Multimodal Large Language Models | | 0
Learn to Refuse: Making Large Language Models More Controllable and Reliable through Knowledge Scope Limitation and Refusal Mechanism | | 0
Brain-like Flexible Visual Inference by Harnessing Feedback-Feedforward Alignment | Code | 0
Synthetic Imitation Edit Feedback for Factual Alignment in Clinical Summarization | Code | 0
N-Critics: Self-Refinement of Large Language Models with Ensemble of Critics | | 0
Sequence-Level Certainty Reduces Hallucination In Knowledge-Grounded Dialogue Generation | | 0
Virtual Accessory Try-On via Keypoint Hallucination | | 0
Critic-Driven Decoding for Mitigating Hallucinations in Data-to-text Generation | Code | 0
Learned, uncertainty-driven adaptive acquisition for photon-efficient scanning microscopy | | 0
Correction with Backtracking Reduces Hallucination in Summarization | Code | 0
Fidelity-Enriched Contrastive Search: Reconciling the Faithfulness-Diversity Trade-Off in Text Generation | Code | 0
Language Models Hallucinate, but May Excel at Fact Verification | Code | 0
Unleashing the potential of prompt engineering for large language models | | 0
Hallucination Detection for Grounded Instruction Generation | | 0
Chainpoll: A high efficacy method for LLM hallucination detection | Code | 0
Page 61 of 73

No leaderboard results yet.