SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 576–600 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| MMPKUBase: A Comprehensive and High-quality Chinese Multi-modal Knowledge Graph | — | 0 |
| Towards Flexible Evaluation for Generative Visual Question Answering | Code | 0 |
| MM-Vet v2: A Challenging Benchmark to Evaluate Large Multimodal Models for Integrated Capabilities | Code | 3 |
| SimpleLLM4AD: An End-to-End Vision-Language Model with Graph Visual Question Answering for Autonomous Driving | — | 0 |
| Prompting Medical Large Vision-Language Models to Diagnose Pathologies by Visual Question Answering | — | 0 |
| Boosting Audio Visual Question Answering via Key Semantic-Aware Cues | Code | 1 |
| Pyramid Coder: Hierarchical Code Generator for Compositional Visual Question Answering | — | 0 |
| Take A Step Back: Rethinking the Two Stages in Visual Reasoning | — | 0 |
| VolDoGer: LLM-assisted Datasets for Domain Generalization in Vision-Language Tasks | — | 0 |
| AdaCoder: Adaptive Prompt Compression for Programmatic Visual Question Answering | — | 0 |
| Towards A Generalizable Pathology Foundation Model via Unified Knowledge Distillation | Code | 2 |
| VILA^2: VILA Augmented VILA | — | 0 |
| INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model | Code | 1 |
| Imperfect Vision Encoders: Efficient and Robust Tuning for Vision-Language Models | — | 0 |
| Learning Trimodal Relation for AVQA with Missing Modality | Code | 1 |
| Exploring the Effectiveness of Object-Centric Representations in Visual Question Answering: Comparative Insights with Foundation Models | — | 0 |
| Knowledge Acquisition Disentanglement for Knowledge-based Visual Question Answering with Large Language Models | Code | 0 |
| HaloQuest: A Visual Hallucination Dataset for Advancing Multimodal Reasoning | Code | 1 |
| MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Diversity | Code | 2 |
| QuIIL at T3 challenge: Towards Automation in Life-Saving Intervention Procedures from First-Person View | Code | 0 |
| Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark | Code | 1 |
| Multimodal Reranking for Knowledge-Intensive Visual Question Answering | — | 0 |
| ProcTag: Process Tagging for Assessing the Efficacy of Document Instruction Data | — | 0 |
| EchoSight: Advancing Visual-Language Models with Wiki Knowledge | — | 0 |
| TM-PATHVQA: 90000+ Textless Multilingual Questions for Medical Visual Question Answering | — | 0 |
Page 24 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |