SOTAVerified

Hallucination

Papers

Showing 26–50 of 1816 papers

Title | Status | Hype
Halu-J: Critique-Based Hallucination Judge | Code | 4
The All-Seeing Project V2: Towards General Relation Comprehension of the Open World | Code | 4
Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration | Code | 4
G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering | Code | 4
A Survey of State of the Art Large Vision Language Models: Alignment, Benchmark, Evaluations and Challenges | Code | 4
Tarsier2: Advancing Large Vision-Language Models from Detailed Video Description to Comprehensive Video Understanding | Code | 4
Retrieval Head Mechanistically Explains Long-Context Factuality | Code | 3
RefChecker: Reference-based Fine-grained Hallucination Checker and Benchmark for Large Language Models | Code | 3
EventRL: Enhancing Event Extraction with Outcome Supervision for Large Language Models | Code | 3
ResumeFlow: An LLM-facilitated Pipeline for Personalized Resume Generation and Refinement | Code | 3
RAG and RAU: A Survey on Retrieval-Augmented Language Model in Natural Language Processing | Code | 3
PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models | Code | 3
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Horizon Generation | Code | 3
Evaluating Hallucinations in Chinese Large Language Models | Code | 3
PokeLLMon: A Human-Parity Agent for Pokemon Battles with Large Language Models | Code | 3
Florence-VL: Enhancing Vision-Language Models with Generative Vision Encoder and Depth-Breadth Fusion | Code | 3
PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models | Code | 3
AudioTrust: Benchmarking the Multifaceted Trustworthiness of Audio Large Language Models | Code | 3
MMed-RAG: Versatile Multimodal RAG System for Medical Vision Language Models | Code | 3
AutoHallusion: Automatic Generation of Hallucination Benchmarks for Vision-Language Models | Code | 3
Learning Dynamics of LLM Finetuning | Code | 3
Automated Hypothesis Validation with Agentic Sequential Falsifications | Code | 3
Embodied Agent Interface: Benchmarking LLMs for Embodied Decision Making | Code | 3
LLaVA-MoD: Making LLaVA Tiny via MoE Knowledge Distillation | Code | 3
Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models | Code | 3
Page 2 of 73
