SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1301–1325 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| Attribute Diversity Determines the Systematicity Gap in VQA | Code | 0 |
| Asking More Informative Questions for Grounded Retrieval | | 0 |
| What Large Language Models Bring to Text-rich VQA? | | 0 |
| Visual Commonsense based Heterogeneous Graph Contrastive Learning | | 0 |
| Zero-shot Translation of Attention Patterns in VQA Models to Natural Language | Code | 0 |
| CoVLM: Composing Visual Entities and Relationships in Large Language Models Via Communicative Decoding | | 0 |
| From Image to Language: A Critical Analysis of Visual Question Answering (VQA) Approaches, Challenges, and Opportunities | | 0 |
| VQA-GEN: A Visual Question Answering Benchmark for Domain Generalization | | 0 |
| A Systematic Evaluation of GPT-4V's Multimodal Capability for Medical Image Analysis | | 0 |
| Learning to Follow Object-Centric Image Editing Instructions Faithfully | Code | 0 |
| Dynamic Task and Weight Prioritization Curriculum Learning for Multimodal Imagery | Code | 0 |
| ViCLEVR: A Visual Reasoning Dataset and Hybrid Multimodal Fusion Model for Visual Question Answering in Vietnamese | Code | 0 |
| Davidsonian Scene Graph: Improving Reliability in Fine-grained Evaluation for Text-to-Image Generation | | 0 |
| Incorporating Probing Signals into Multimodal Machine Translation via Visual Question-Answering Pairs | Code | 0 |
| CAD -- Contextual Multi-modal Alignment for Dynamic AVQA | | 0 |
| Exploring Question Decomposition for Zero-Shot VQA | | 0 |
| Enhancing Document Information Analysis with Multi-Task Pre-training: A Robust Approach for Information Extraction in Visually-Rich Documents | | 0 |
| Multimodal Representations for Teacher-Guided Compositional Visual Reasoning | | 0 |
| Dataset Bias Mitigation in Multiple-Choice Visual Question Answering and Beyond | | 0 |
| LXMERT Model Compression for Visual Question Answering | Code | 0 |
| SILC: Improving Vision Language Pretraining with Self-Distillation | | 0 |
| A Simple Baseline for Knowledge-Based Visual Question Answering | Code | 0 |
| RSAdapter: Adapting Multimodal Models for Remote Sensing Visual Question Answering | Code | 0 |
| UNK-VQA: A Dataset and a Probe into the Abstention Ability of Multi-modal Large Models | Code | 0 |
| Exploring Sparse Spatial Relation in Graph Inference for Text-Based VQA | | 0 |
Page 53 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |