SOTAVerified

Hallucination Papers

Showing 51–100 of 1,816 papers

| Title | Status | Hype |
| --- | --- | --- |
| Towards An End-to-End Framework for Flow-Guided Video Inpainting | Code | 3 |
| LLMDFA: Analyzing Dataflow in Code with Large Language Models | Code | 3 |
| Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | Code | 3 |
| Evaluating Hallucinations in Chinese Large Language Models | Code | 3 |
| Automated Hypothesis Validation with Agentic Sequential Falsifications | Code | 3 |
| RAGEval: Scenario Specific RAG Evaluation Dataset Generation Framework | Code | 3 |
| RefChecker: Reference-based Fine-grained Hallucination Checker and Benchmark for Large Language Models | Code | 3 |
| Retrieval Head Mechanistically Explains Long-Context Factuality | Code | 3 |
| RAG and RAU: A Survey on Retrieval-Augmented Language Model in Natural Language Processing | Code | 3 |
| PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models | Code | 3 |
| RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Horizon Generation | Code | 3 |
| AudioTrust: Benchmarking the Multifaceted Trustworthiness of Audio Large Language Models | Code | 3 |
| PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models | Code | 3 |
| Embodied Agent Interface: Benchmarking LLMs for Embodied Decision Making | Code | 3 |
| Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models | Code | 3 |
| PokeLLMon: A Human-Parity Agent for Pokemon Battles with Large Language Models | Code | 3 |
| Mitigating Object Hallucination via Concentric Causal Attention | Code | 2 |
| Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention | Code | 2 |
| MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation | Code | 2 |
| Self-Introspective Decoding: Alleviating Hallucinations for Large Vision-Language Models | Code | 2 |
| Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding | Code | 2 |
| MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in Large Language Models | Code | 2 |
| A Diffusion-Based Generative Equalizer for Music Restoration | Code | 2 |
| Mitigating Hallucinations in Large Vision-Language Models via DPO: On-Policy Data Hold the Key | Code | 2 |
| Medical Hallucinations in Foundation Models and Their Impact on Healthcare | Code | 2 |
| mDPO: Conditional Preference Optimization for Multimodal Large Language Models | Code | 2 |
| MeMemo: On-device Retrieval Augmentation for Private and Personalized Text Generation | Code | 2 |
| Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality | Code | 2 |
| MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Code | 2 |
| 3D-GRAND: A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination | Code | 2 |
| Lookback Lens: Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps | Code | 2 |
| Controllable and Reliable Knowledge-Intensive Task-Oriented Conversational Agents with Declarative Genie Worksheets | Code | 2 |
| Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal Large Language Models | Code | 2 |
| Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective | Code | 2 |
| ANAH: Analytical Annotation of Hallucinations in Large Language Models | Code | 2 |
| KnowHalu: Hallucination Detection via Multi-Form Knowledge Based Factual Checking | Code | 2 |
| Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions | Code | 2 |
| Lawyer LLaMA Technical Report | Code | 2 |
| LLaMP: Large Language Model Made Powerful for High-fidelity Materials Knowledge Retrieval and Distillation | Code | 2 |
| LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models | Code | 2 |
| CHiP: Cross-modal Hierarchical Direct Preference Optimization for Multimodal LLMs | Code | 2 |
| HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models | Code | 2 |
| HallusionBench: An Advanced Diagnostic Suite for Entangled Language Hallucination and Visual Illusion in Large Vision-Language Models | Code | 2 |
| HalOmi: A Manually Annotated Benchmark for Multilingual Hallucination and Omission Detection in Machine Translation | Code | 2 |
| VHM: Versatile and Honest Vision Language Model for Remote Sensing Image Analysis | Code | 2 |
| HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding | Code | 2 |
| Calibrated Self-Rewarding Vision Language Models | Code | 2 |
| Knowledge Graph-Guided Retrieval Augmented Generation | Code | 2 |
| Image Textualization: An Automatic Framework for Creating Accurate and Detailed Image Descriptions | Code | 2 |
| Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering | Code | 2 |
Page 2 of 37
