SOTAVerified

Hallucination Papers

Showing 701–750 of 1816 papers

Title | Status | Hype
Evolutionary thoughts: integration of large language models and evolutionary algorithms | Code | 0
Incorporating Task-specific Concept Knowledge into Script Learning | Code | 0
Instruction Makes a Difference | Code | 0
Investigating and Mitigating Object Hallucinations in Pretrained Vision-Language (CLIP) Models | Code | 0
Image Denoising with Control over Deep Network Hallucination | Code | 0
Improving Factual Error Correction by Learning to Inject Factual Errors | Code | 0
Ever: Mitigating Hallucination in Large Language Models through Real-Time Verification and Rectification | Code | 0
A Unified Hallucination Mitigation Framework for Large Vision-Language Models | Code | 0
Improving Factuality in Large Language Models via Decoding-Time Hallucinatory and Truthful Comparators | Code | 0
Chain-of-Action: Faithful and Multimodal Question Answering through Large Language Models | Code | 0
Im2Flow: Motion Hallucination from Static Images for Action Recognition | Code | 0
How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities | Code | 0
Crafting In-context Examples according to LMs' Parametric Knowledge | Code | 0
How Much Do LLMs Hallucinate across Languages? On Multilingual Estimation of LLM Hallucination in the Wild | Code | 0
HICD: Hallucination-Inducing via Attention Dispersion for Contrastive Decoding to Mitigate Hallucinations in Large Language Models | Code | 0
Abstract Meaning Representation for Hospital Discharge Summarization | Code | 0
How Helpful is Inverse Reinforcement Learning for Table-to-Text Generation? | Code | 0
CCHall: A Novel Benchmark for Joint Cross-Lingual and Cross-Modal Hallucinations Detection in Large Language Models | Code | 0
HaRiM^+: Evaluating Summary Quality with Hallucination Risk | Code | 0
Are Large Language Models Good at Utility Judgments? | Code | 0
Causal-LLaVA: Causal Disentanglement for Mitigating Hallucination in Multimodal Large Language Models | Code | 0
Handling Ontology Gaps in Semantic Parsing | Code | 0
Error Typing for Smarter Rewards: Improving Process Reward Models with Error-Aware Hierarchical Supervision | Code | 0
ERBench: An Entity-Relationship based Automatically Verifiable Hallucination Benchmark for Large Language Models | Code | 0
Handwritten Code Recognition for Pen-and-Paper CS Education | Code | 0
HELPD: Mitigating Hallucination of LVLMs by Hierarchical Feedback Learning with Vision-enhanced Penalty Decoding | Code | 0
HypoTermQA: Hypothetical Terms Dataset for Benchmarking Hallucination Tendency of LLMs | Code | 0
Catch Me if You Search: When Contextual Web Search Results Affect the Detection of Hallucinations | Code | 0
HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection | Code | 0
Re-Ex: Revising after Explanation Reduces the Factual Errors in LLM Responses | Code | 0
Entity-driven Fact-aware Abstractive Summarization of Biomedical Literature | Code | 0
HALOS: Hallucination-free Organ Segmentation after Organ Resection Surgery | Code | 0
Careless Whisper: Speech-to-Text Hallucination Harms | Code | 0
Regression is all you need for medical image translation | Code | 0
HalluShift: Measuring Distribution Shifts towards Hallucination Detection in LLMs | Code | 0
HALO: Hallucination Analysis and Learning Optimization to Empower LLMs with Retrieval-Augmented Context for Guided Clinical Decision Making | Code | 0
HaluEval-Wild: Evaluating Hallucinations of Language Models in the Wild | Code | 0
Hallucination Reduction in Long Input Text Summarization | Code | 0
HalluciNet-ing Spatiotemporal Representations Using a 2D-CNN | Code | 0
Hallucination, Monofacts, and Miscalibration: An Empirical Investigation | Code | 0
A Probabilistic Framework for LLM Hallucination Detection via Belief Tree Propagation | Code | 0
Retrieval Visual Contrastive Decoding to Mitigate Object Hallucinations in Large Vision-Language Models | Code | 0
HALLUCINOGEN: A Benchmark for Evaluating Object Hallucination in Large Visual-Language Models | Code | 0
Enhancing the General Agent Capabilities of Low-Parameter LLMs through Tuning and Multi-Branch Reasoning | Code | 0
Hallucination In Object Detection -- A Study In Visual Part Verification | Code | 0
Hallucination Detection in LLMs: Fast and Memory-Efficient Fine-Tuned Models | Code | 0
Hallucination Mitigation Prompts Long-term Video Understanding | Code | 0
Hallucination Elimination and Semantic Enhancement Framework for Vision-Language Models in Traffic Scenarios | Code | 0
Hallucination Mitigation using Agentic AI Natural Language-Based Frameworks | Code | 0
HalluDial: A Large-Scale Benchmark for Automatic Dialogue-Level Hallucination Evaluation | Code | 0
Page 15 of 37

No leaderboard results yet.