SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1301–1350 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| Attribute Diversity Determines the Systematicity Gap in VQA | Code | 0 |
| Asking More Informative Questions for Grounded Retrieval | — | 0 |
| What Large Language Models Bring to Text-rich VQA? | — | 0 |
| Visual Commonsense based Heterogeneous Graph Contrastive Learning | — | 0 |
| Zero-shot Translation of Attention Patterns in VQA Models to Natural Language | Code | 0 |
| CoVLM: Composing Visual Entities and Relationships in Large Language Models Via Communicative Decoding | — | 0 |
| From Image to Language: A Critical Analysis of Visual Question Answering (VQA) Approaches, Challenges, and Opportunities | — | 0 |
| VQA-GEN: A Visual Question Answering Benchmark for Domain Generalization | — | 0 |
| A Systematic Evaluation of GPT-4V's Multimodal Capability for Medical Image Analysis | — | 0 |
| Learning to Follow Object-Centric Image Editing Instructions Faithfully | Code | 0 |
| Dynamic Task and Weight Prioritization Curriculum Learning for Multimodal Imagery | Code | 0 |
| ViCLEVR: A Visual Reasoning Dataset and Hybrid Multimodal Fusion Model for Visual Question Answering in Vietnamese | Code | 0 |
| Davidsonian Scene Graph: Improving Reliability in Fine-grained Evaluation for Text-to-Image Generation | — | 0 |
| Incorporating Probing Signals into Multimodal Machine Translation via Visual Question-Answering Pairs | Code | 0 |
| CAD -- Contextual Multi-modal Alignment for Dynamic AVQA | — | 0 |
| Exploring Question Decomposition for Zero-Shot VQA | — | 0 |
| Enhancing Document Information Analysis with Multi-Task Pre-training: A Robust Approach for Information Extraction in Visually-Rich Documents | — | 0 |
| Multimodal Representations for Teacher-Guided Compositional Visual Reasoning | — | 0 |
| Dataset Bias Mitigation in Multiple-Choice Visual Question Answering and Beyond | — | 0 |
| LXMERT Model Compression for Visual Question Answering | Code | 0 |
| SILC: Improving Vision Language Pretraining with Self-Distillation | — | 0 |
| A Simple Baseline for Knowledge-Based Visual Question Answering | Code | 0 |
| RSAdapter: Adapting Multimodal Models for Remote Sensing Visual Question Answering | Code | 0 |
| UNK-VQA: A Dataset and a Probe into the Abstention Ability of Multi-modal Large Models | Code | 0 |
| Exploring Sparse Spatial Relation in Graph Inference for Text-Based VQA | — | 0 |
| Enhancing BERT-Based Visual Question Answering through Keyword-Driven Sentence Selection | — | 0 |
| Ziya-Visual: Bilingual Large Vision-Language Model via Multi-Task Instruction Tuning | — | 0 |
| Open-Set Knowledge-Based Visual Question Answering with Inference Paths | Code | 0 |
| Improving mitosis detection on histopathology images using large vision-language models | — | 0 |
| Uncovering Hidden Connections: Iterative Search and Reasoning for Video-grounded Dialog | Code | 0 |
| Jaeger: A Concatenation-Based Multi-Transformer VQA Model | — | 0 |
| Solution for SMART-101 Challenge of ICCV Multi-modal Algorithmic Reasoning Task 2023 | — | 0 |
| Negative Object Presence Evaluation (NOPE) to Measure Object Hallucination in Vision-Language Models | — | 0 |
| Causal Reasoning through Two Layers of Cognition for Improving Generalization in Visual Question Answering | — | 0 |
| Lightweight In-Context Tuning for Multimodal Unified Models | — | 0 |
| Improving Automatic VQA Evaluation Using Large Language Models | — | 0 |
| On the Cognition of Visual Question Answering Models and Human Intelligence: A Comparative Study | — | 0 |
| SelfGraphVQA: A Self-Supervised Graph Neural Network for Scene-based Question Answering | — | 0 |
| Human Mobility Question Answering (Vision Paper) | — | 0 |
| Tackling VQA with Pretrained Foundation Models without Further Training | — | 0 |
| KOSMOS-2.5: A Multimodal Literate Model | — | 0 |
| Visual Question Answering in the Medical Domain | — | 0 |
| Sentence Attention Blocks for Answer Grounding | — | 0 |
| Syntax Tree Constrained Graph Network for Visual Question Answering | — | 0 |
| D3: Data Diversity Design for Systematic Generalization in Visual Question Answering | Code | 0 |
| Rank2Tell: A Multimodal Driving Dataset for Joint Importance Ranking and Reasoning | — | 0 |
| Interpretable Visual Question Answering via Reasoning Supervision | — | 0 |
| Evaluation and Enhancement of Semantic Grounding in Large Vision-Language Models | — | 0 |
| Physically Grounded Vision-Language Models for Robotic Manipulation | — | 0 |
| Towards Addressing the Misalignment of Object Proposal Evaluation for Vision-Language Tasks via Semantic Grounding | Code | 0 |
Page 27 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |