SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1726–1750 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| Removing Bias in Multi-modal Classifiers: Regularization by Maximizing Functional Entropies | Code | 1 |
| Bayesian Attention Modules | Code | 1 |
| SOrT-ing VQA Models: Contrastive Gradient Learning for Improved Consistency | Code | 0 |
| Answer-checking in Context: A Multi-modal Fully Attention Network for Visual Question Answering | | 0 |
| New Ideas and Trends in Deep Multimodal Content Understanding: A Review | | 0 |
| Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs | Code | 1 |
| Does my multimodal model learn cross-modal interactions? It's harder to tell than you might think! | | 0 |
| Contrast and Classify: Training Robust VQA Models | Code | 1 |
| Interpretable Neural Computation for Real-World Compositional Visual Question Answering | | 0 |
| Characterizing Datasets for Social Visual Question Answering, and the New TinySocial Dataset | | 0 |
| Pathological Visual Question Answering | | 0 |
| Finding the Evidence: Localization-aware Answer Prediction for Text Visual Question Answering | | 0 |
| Attention Guided Semantic Relationship Parsing for Visual Question Answering | | 0 |
| CAPTION: Correction by Analyses, POS-Tagging and Interpretation of Objects using only Nouns | | 0 |
| ISAAQ -- Mastering Textbook Questions with Pre-trained Transformers and Bottom-Up and Top-Down Attention | | 0 |
| Graph-based Heuristic Search for Module Selection Procedure in Neural Module Network | | 0 |
| Spatial Attention as an Interface for Image Captioning Models | | 0 |
| Hierarchical Deep Multi-modal Network for Medical Visual Question Answering | Code | 0 |
| Multiple interaction learning with question-type prior knowledge for constraining answer search space in visual question answering | Code | 0 |
| X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers | Code | 1 |
| Regularizing Attention Networks for Anomaly Detection in Visual Question Answering | | 0 |
| MUTANT: A Training Paradigm for Out-of-Distribution Generalization in Visual Question Answering | Code | 1 |
| A Multimodal Memes Classification: A Survey and Open Research Issues | | 0 |
| A Comparison of Pre-trained Vision-and-Language Models for Multimodal Representation Learning across Medical Images and Reports | Code | 1 |
| Cross-modal Knowledge Reasoning for Knowledge-based Visual Question Answering | | 0 |
Page 70 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |