SOTAVerified

Hallucination Papers

Showing papers 1501–1550 of 1816

Papers with released code are marked [Code]; every paper on this page has a Hype score of 0.

Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models
Enhancing Emergency Decision-making with Knowledge Graphs and Large Language Models
Ever: Mitigating Hallucination in Large Language Models through Real-Time Verification and Rectification [Code]
Predicting Text Preference Via Structured Comparative Reasoning
Insights into Classifying and Mitigating LLMs' Hallucinations
GPT-4V(ision) as A Social Media Analysis Engine
Investigating Multi-Pivot Ensembling with Massively Multilingual Machine Translation Models [Code]
Hallucination Augmented Recitations for Language Models
Hallucination-minimized Data-to-answer Framework for Financial Decision-makers
CBSiMT: Mitigating Hallucination in Simultaneous Machine Translation with Weighted Prefix-to-Prefix Training
ChEF: A Comprehensive Evaluation Framework for Standardized Assessment of Multimodal Large Language Models
Learn to Refuse: Making Large Language Models More Controllable and Reliable through Knowledge Scope Limitation and Refusal Mechanism
Brain-like Flexible Visual Inference by Harnessing Feedback-Feedforward Alignment [Code]
Synthetic Imitation Edit Feedback for Factual Alignment in Clinical Summarization [Code]
N-Critics: Self-Refinement of Large Language Models with Ensemble of Critics
Sequence-Level Certainty Reduces Hallucination In Knowledge-Grounded Dialogue Generation
Virtual Accessory Try-On via Keypoint Hallucination
Critic-Driven Decoding for Mitigating Hallucinations in Data-to-text Generation [Code]
Learned, uncertainty-driven adaptive acquisition for photon-efficient scanning microscopy
Correction with Backtracking Reduces Hallucination in Summarization [Code]
Fidelity-Enriched Contrastive Search: Reconciling the Faithfulness-Diversity Trade-Off in Text Generation [Code]
Language Models Hallucinate, but May Excel at Fact Verification [Code]
Unleashing the potential of prompt engineering for large language models
Hallucination Detection for Grounded Instruction Generation
Chainpoll: A high efficacy method for LLM hallucination detection [Code]
Long-Form Speech Translation through Segmentation with Finite-State Decoding Constraints on Large Language Models
MAF: Multi-Aspect Feedback for Improving Reasoning in Large Language Models [Code]
Know Where to Go: Make LLM a Relevant, Responsible, and Trustworthy Searcher
Reliable Academic Conference Question Answering: A Study Based on Large Language Model [Code]
ReEval: Automatic Hallucination Evaluation for Retrieval-Augmented Large Language Models via Transferable Adversarial Attacks
Flow Dynamics Correction for Action Recognition
Towards reducing hallucination in extracting information from financial reports using Large Language Models
Metric Ensembles For Hallucination Detection
Assessing the Reliability of Large Language Model Knowledge [Code]
Configuration Validation with Large Language Models
GameGPT: Multi-agent Collaborative Framework for Game Development
GraphextQA: A Benchmark for Evaluating Graph-Enhanced Large Language Models [Code]
A New Benchmark and Reverse Validation Method for Passage-level Hallucination Detection [Code]
Towards Mitigating Hallucination in Large Language Models via Self-Reflection
Teaching Language Models to Hallucinate Less with Synthetic Tasks
Negative Object Presence Evaluation (NOPE) to Measure Object Hallucination in Vision-Language Models
The Troubling Emergence of Hallucination in Large Language Models -- An Extensive Definition, Quantification, and Prescriptive Remediations
Improving the Reliability of Large Language Models by Leveraging Uncertainty-Aware In-Context Learning
AutoHall: Automated Hallucination Dataset Generation for Large Language Models
Self-Specialization: Uncovering Latent Expertise within Large Language Models
Neuro Symbolic Reasoning for Planning: Counterexample Guided Inductive Synthesis using Large Language Models and Satisfiability Solving
Hallucination Reduction in Long Input Text Summarization [Code]
Augmenting LLMs with Knowledge: A survey on hallucination prevention
Aligning Large Multimodal Models with Factually Augmented RLHF
Chain-of-Verification Reduces Hallucination in Large Language Models [Code]
Page 31 of 37