SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1701–1750 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| Multimodal Graph Networks for Compositional Generalization in Visual Question Answering | | 0 |
| Point and Ask: Incorporating Pointing into Visual Question Answering | Code | 1 |
| Learning from Lexical Perturbations for Consistent Visual Question Answering | Code | 0 |
| Siamese Tracking with Lingual Object Constraints | Code | 0 |
| Large Scale Multimodal Classification Using an Ensemble of Transformer Models and Co-Attention | Code | 1 |
| Modular Graph Attention Network for Complex Visual Relational Reasoning | | 0 |
| LRTA: A Transparent Neural-Symbolic Reasoning Framework with Modular Supervision for Visual Question Answering | Code | 1 |
| Logically Consistent Loss for Visual Question Answering | | 0 |
| Generating Natural Questions from Images for Multimodal Assistants | | 0 |
| CapWAP: Captioning with a Purpose | | 0 |
| Learning to Model and Ignore Dataset Bias with Mixed Capacity Ensembles | Code | 0 |
| Disentangling 3D Prototypical Networks For Few-Shot Concept Learning | Code | 1 |
| An Improved Attention for Visual Question Answering | Code | 0 |
| Reasoning Over History: Context Aware Visual Dialog | | 0 |
| Representation, Learning and Reasoning on Spatial Language for Downstream NLP Tasks | | 0 |
| Can Pre-training help VQA with Lexical Variations? | | 0 |
| ConceptBert: Concept-Aware Representation for Visual Question Answering | Code | 1 |
| CapWAP: Image Captioning with a Purpose | | 0 |
| ISAAQ - Mastering Textbook Questions with Pre-trained Transformers and Bottom-Up and Top-Down Attention | | 0 |
| Learning to Contrast the Counterfactual Samples for Robust Visual Question Answering | Code | 1 |
| Loss re-scaling VQA: Revisiting the Language Prior Problem from a Class-imbalance View | Code | 0 |
| Leveraging Visual Question Answering to Improve Text-to-Image Synthesis | | 0 |
| MMFT-BERT: Multimodal Fusion Transformer with BERT Encodings for Visual Question Answering | Code | 1 |
| RUArt: A Novel Text-Centered Solution for Text-Based Visual Question Answering | Code | 1 |
| Beyond VQA: Generating Multi-word Answer and Rationale to Visual Questions | | 0 |
| Removing Bias in Multi-modal Classifiers: Regularization by Maximizing Functional Entropies | Code | 1 |
| Bayesian Attention Modules | Code | 1 |
| SOrT-ing VQA Models: Contrastive Gradient Learning for Improved Consistency | Code | 0 |
| Answer-checking in Context: A Multi-modal Fully Attention Network for Visual Question Answering | | 0 |
| New Ideas and Trends in Deep Multimodal Content Understanding: A Review | | 0 |
| Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs | Code | 1 |
| Does my multimodal model learn cross-modal interactions? It's harder to tell than you might think! | | 0 |
| Contrast and Classify: Training Robust VQA Models | Code | 1 |
| Interpretable Neural Computation for Real-World Compositional Visual Question Answering | | 0 |
| Characterizing Datasets for Social Visual Question Answering, and the New TinySocial Dataset | | 0 |
| Pathological Visual Question Answering | | 0 |
| Finding the Evidence: Localization-aware Answer Prediction for Text Visual Question Answering | | 0 |
| Attention Guided Semantic Relationship Parsing for Visual Question Answering | | 0 |
| CAPTION: Correction by Analyses, POS-Tagging and Interpretation of Objects using only Nouns | | 0 |
| ISAAQ -- Mastering Textbook Questions with Pre-trained Transformers and Bottom-Up and Top-Down Attention | | 0 |
| Graph-based Heuristic Search for Module Selection Procedure in Neural Module Network | | 0 |
| Spatial Attention as an Interface for Image Captioning Models | | 0 |
| Hierarchical Deep Multi-modal Network for Medical Visual Question Answering | Code | 0 |
| Multiple interaction learning with question-type prior knowledge for constraining answer search space in visual question answering | Code | 0 |
| X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers | Code | 1 |
| Regularizing Attention Networks for Anomaly Detection in Visual Question Answering | | 0 |
| MUTANT: A Training Paradigm for Out-of-Distribution Generalization in Visual Question Answering | Code | 1 |
| A Multimodal Memes Classification: A Survey and Open Research Issues | | 0 |
| A Comparison of Pre-trained Vision-and-Language Models for Multimodal Representation Learning across Medical Images and Reports | Code | 1 |
| Cross-modal Knowledge Reasoning for Knowledge-based Visual Question Answering | | 0 |
Page 35 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |