SOTAVerified

Hallucination

Papers

Showing 41–50 of 1816 papers

| Title | Status | Hype |
| --- | --- | --- |
| Florence-VL: Enhancing Vision-Language Models with Generative Vision Encoder and Depth-Breadth Fusion | Code | 3 |
| PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models | Code | 3 |
| AudioTrust: Benchmarking the Multifaceted Trustworthiness of Audio Large Language Models | Code | 3 |
| MMed-RAG: Versatile Multimodal RAG System for Medical Vision Language Models | Code | 3 |
| AutoHallusion: Automatic Generation of Hallucination Benchmarks for Vision-Language Models | Code | 3 |
| Learning Dynamics of LLM Finetuning | Code | 3 |
| Automated Hypothesis Validation with Agentic Sequential Falsifications | Code | 3 |
| Embodied Agent Interface: Benchmarking LLMs for Embodied Decision Making | Code | 3 |
| LLaVA-MoD: Making LLaVA Tiny via MoE Knowledge Distillation | Code | 3 |
| Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models | Code | 3 |
Page 5 of 182

No leaderboard results yet.