SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1651–1700 of 2177 papers

Title | Status | Hype
Robustness through Data Augmentation Loss Consistency | Code | 0
Single-Modal Entropy based Active Learning for Visual Question Answering | — | 0
Towards Language-guided Visual Recognition via Dynamic Convolutions | Code | 0
xGQA: Cross-Lingual Visual Question Answering | — | 0
MMIU: Dataset for Visual Intent Understanding in Multimodal Assistants | — | 0
Improving Users' Mental Model with Attention-directed Counterfactual Edits | — | 0
Beyond Accuracy: A Consolidated Tool for Visual Question Answering Benchmarking | Code | 0
Asking questions on handwritten document collections | — | 0
Breaking Down Questions for Outside-Knowledge VQA | — | 0
Variational Disentangled Attention for Regularized Visual Dialog | — | 0
Crossformer: Transformer with Alternated Cross-Layer Guidance | — | 0
How Much Can CLIP Benefit Vision-and-Language Tasks? | — | 0
Measuring CLEVRness: Black-box Testing of Visual Reasoning Models | — | 0
VQA-MHUG: A Gaze Dataset to Study Multimodal Neural Attention in Visual Question Answering | — | 0
Multimodal Integration of Human-Like Attention in Visual Question Answering | — | 0
How to find a good image-text embedding for remote sensing visual question answering? | — | 0
Image Captioning for Effective Use of Language Models in Knowledge-Based Visual Question Answering | Code | 0
Discovering the Unknown Knowns: Turning Implicit Knowledge in the Dataset into Explicit Training Examples for Visual Question Answering | Code | 0
Towards Developing a Multilingual and Code-Mixed Visual Question Answering System by Knowledge Distillation | — | 0
TxT: Crossmodal End-to-End Learning with Transformers | — | 0
Improved RAMEN: Towards Domain Generalization for Visual Question Answering | Code | 0
Weakly Supervised Relative Spatial Reasoning for Visual Question Answering | Code | 0
On the Significance of Question Encoder Sequence Model in the Out-of-Distribution Performance in Visual Question Answering | — | 0
Auto-Parsing Network for Image Captioning and Visual Question Answering | — | 0
Localize, Group, and Select: Boosting Text-VQA by Scene Text Modeling | — | 0
VALSE: A Task-Independent Benchmark for Vision and Language Models centered on Linguistic Phenomena | — | 0
BERTHop: An Effective Vision-and-Language Model for Chest X-ray Disease Diagnosis | Code | 0
LRRA: A Transparent Neural-Symbolic Reasoning Framework for Real-World Visual Question Answering | — | 0
In Factuality: Efficient Integration of Relevant Facts for Visual Question Answering | — | 0
Exploiting Image Captions and External Knowledge as Representation Enhancement for Visual Question Answering | — | 0
Towards Visual Question Answering on Pathology Images | Code | 0
X-GGM: Graph Generative Modeling for Out-of-Distribution Generalization in Visual Question Answering | Code | 0
MuVAM: A Multi-View Attention-based Model for Medical Visual Question Answering | — | 0
Cognitive Visual Commonsense Reasoning Using Dynamic Working Memory | Code | 0
Adventurer's Treasure Hunt: A Transparent System for Visually Grounded Compositional Visual Question Answering based on Scene Graphs | — | 0
Multimodal Few-Shot Learning with Frozen Language Models | — | 0
Probing Inter-modality: Visual Parsing with Self-Attention for Vision-Language Pre-training | — | 0
A Picture May Be Worth a Hundred Words for Visual Question Answering | — | 0
VQA-Aid: Visual Question Answering for Post-Disaster Damage Assessment and Analysis | — | 0
How Modular Should Neural Module Networks Be for Systematic Generalization? | Code | 0
NAAQA: A Neural Architecture for Acoustic Question Answering | Code | 0
Bayesian Attention Belief Networks | — | 0
Are VQA Systems RAD? Measuring Robustness to Augmented Data with Focused Interventions | — | 0
PAM: Understanding Product Images in Cross Product Category Attribute Extraction | — | 0
Human-Adversarial Visual Question Answering | — | 0
Grounding Complex Navigational Instructions Using Scene Graphs | — | 0
MIMOQA: Multimodal Input Multimodal Output Question Answering | — | 0
Semantic Aligned Multi-modal Transformer for Vision-Language Understanding: A Preliminary Study on Visual QA | — | 0
Adversarial VQA: A New Benchmark for Evaluating the Robustness of VQA Models | — | 0
CLEVR_HYP: A Challenge Dataset and Baselines for Visual Question Answering with Hypothetical Actions over Images | Code | 0
Page 34 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | — | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified