SOTAVerified

Visual Question Answering

Papers

Showing 1676–1700 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| VALSE: A Task-Independent Benchmark for Vision and Language Models centered on Linguistic Phenomena | — | 0 |
| BERTHop: An Effective Vision-and-Language Model for Chest X-ray Disease Diagnosis | Code | 0 |
| LRRA: A Transparent Neural-Symbolic Reasoning Framework for Real-World Visual Question Answering | — | 0 |
| In Factuality: Efficient Integration of Relevant Facts for Visual Question Answering | — | 0 |
| Exploiting Image Captions and External Knowledge as Representation Enhancement for Visual Question Answering | — | 0 |
| Towards Visual Question Answering on Pathology Images | Code | 0 |
| X-GGM: Graph Generative Modeling for Out-of-Distribution Generalization in Visual Question Answering | Code | 0 |
| MuVAM: A Multi-View Attention-based Model for Medical Visual Question Answering | — | 0 |
| Cognitive Visual Commonsense Reasoning Using Dynamic Working Memory | Code | 0 |
| Adventurer's Treasure Hunt: A Transparent System for Visually Grounded Compositional Visual Question Answering based on Scene Graphs | — | 0 |
| Multimodal Few-Shot Learning with Frozen Language Models | — | 0 |
| Probing Inter-modality: Visual Parsing with Self-Attention for Vision-Language Pre-training | — | 0 |
| A Picture May Be Worth a Hundred Words for Visual Question Answering | — | 0 |
| VQA-Aid: Visual Question Answering for Post-Disaster Damage Assessment and Analysis | — | 0 |
| How Modular Should Neural Module Networks Be for Systematic Generalization? | Code | 0 |
| NAAQA: A Neural Architecture for Acoustic Question Answering | Code | 0 |
| Bayesian Attention Belief Networks | — | 0 |
| Are VQA Systems RAD? Measuring Robustness to Augmented Data with Focused Interventions | — | 0 |
| PAM: Understanding Product Images in Cross Product Category Attribute Extraction | — | 0 |
| Human-Adversarial Visual Question Answering | — | 0 |
| Grounding Complex Navigational Instructions Using Scene Graphs | — | 0 |
| MIMOQA: Multimodal Input Multimodal Output Question Answering | — | 0 |
| Semantic Aligned Multi-modal Transformer for Vision-Language Understanding: A Preliminary Study on Visual QA | — | 0 |
| Adversarial VQA: A New Benchmark for Evaluating the Robustness of VQA Models | — | 0 |
| CLEVR_HYP: A Challenge Dataset and Baselines for Visual Question Answering with Hypothetical Actions over Images | Code | 0 |
Page 68 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |