SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a task at the intersection of computer vision and natural language processing: given an image and a free-form, natural-language question about it, a model must understand the image's content and produce an accurate natural-language answer.

Image Source: visualqa.org
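
As a concrete illustration of the task interface, the sketch below runs an off-the-shelf VQA model on a single image–question pair. This is a minimal example assuming the Hugging Face transformers library and the publicly released ViLT checkpoint dandelin/vilt-b32-finetuned-vqa; it is not the method of any particular paper listed below.

```python
# Minimal VQA inference sketch (assumes: pip install transformers torch pillow requests).
# The ViLT checkpoint used here is an illustrative choice fine-tuned on VQAv2,
# not the method of any specific paper on this page.
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# Any RGB image works; here we fetch a sample COCO image.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

# Encode the (image, question) pair and take the highest-scoring answer class.
inputs = processor(image, question, return_tensors="pt")
logits = model(**inputs).logits
answer = model.config.id2label[logits.argmax(-1).item()]
print(answer)  # e.g. "2"
```

Note the design choice this exposes: ViLT treats VQA as classification over a fixed vocabulary of frequent answers, while generative models such as PaLI (see the leaderboards below) decode free-text answers instead; the image-plus-question-in, answer-out interface is the same either way.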

Papers

Showing 1651–1700 of 2167 papers

| Title | Status | Hype |
| --- | --- | --- |
| LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding | Code | 0 |
| Object-Centric Diagnosis of Visual Reasoning |  | 0 |
| Learning content and context with language bias for Visual Question Answering | Code | 0 |
| KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA |  | 0 |
| On Modality Bias in the TVQA Dataset | Code | 0 |
| Trying Bilinear Pooling in Video-QA |  | 0 |
| KVL-BERT: Knowledge Enhanced Visual-and-Linguistic BERT for Visual Commonsense Reasoning |  | 0 |
| Simple is not Easy: A Simple Strong Baseline for TextVQA and TextCaps |  | 0 |
| Study on the Assessment of the Quality of Experience of Streaming Video | Code | 0 |
| Understanding Guided Image Captioning Performance across Domains | Code | 0 |
| WeaQA: Weak Supervision via Captions for Visual Question Answering |  | 0 |
| Multimodal Graph Networks for Compositional Generalization in Visual Question Answering |  | 0 |
| Open-Ended Multi-Modal Relational Reasoning for Video Question Answering | Code | 0 |
| A Unified Framework for Multilingual and Code-Mixed Visual Question Answering |  | 0 |
| Towards Knowledge-Augmented Visual Question Answering | Code | 0 |
| Learning from Lexical Perturbations for Consistent Visual Question Answering | Code | 0 |
| Siamese Tracking with Lingual Object Constraints | Code | 0 |
| Interpretable Visual Reasoning via Induced Symbolic Space | Code | 0 |
| Modular Graph Attention Network for Complex Visual Relational Reasoning |  | 0 |
| Logically Consistent Loss for Visual Question Answering |  | 0 |
| Generating Natural Questions from Images for Multimodal Assistants |  | 0 |
| CapWAP: Captioning with a Purpose |  | 0 |
| Learning to Model and Ignore Dataset Bias with Mixed Capacity Ensembles | Code | 0 |
| An Improved Attention for Visual Question Answering | Code | 0 |
| Reasoning Over History: Context Aware Visual Dialog |  | 0 |
| Can Pre-training help VQA with Lexical Variations? |  | 0 |
| Representation, Learning and Reasoning on Spatial Language for Downstream NLP Tasks |  | 0 |
| STL-CQA: Structure-based Transformers with Localization and Encoding for Chart Question Answering |  | 0 |
| CapWAP: Image Captioning with a Purpose |  | 0 |
| ISAAQ - Mastering Textbook Questions with Pre-trained Transformers and Bottom-Up and Top-Down Attention |  | 0 |
| Loss re-scaling VQA: Revisiting the Language Prior Problem from a Class-imbalance View | Code | 0 |
| Leveraging Visual Question Answering to Improve Text-to-Image Synthesis |  | 0 |
| Beyond VQA: Generating Multi-word Answer and Rationale to Visual Questions |  | 0 |
| SOrT-ing VQA Models: Contrastive Gradient Learning for Improved Consistency | Code | 0 |
| Answer-checking in Context: A Multi-modal Fully Attention Network for Visual Question Answering |  | 0 |
| New Ideas and Trends in Deep Multimodal Content Understanding: A Review |  | 0 |
| Does my multimodal model learn cross-modal interactions? It's harder to tell than you might think! |  | 0 |
| Interpretable Neural Computation for Real-World Compositional Visual Question Answering |  | 0 |
| Characterizing Datasets for Social Visual Question Answering, and the New TinySocial Dataset |  | 0 |
| Finding the Evidence: Localization-aware Answer Prediction for Text Visual Question Answering |  | 0 |
| Pathological Visual Question Answering |  | 0 |
| Attention Guided Semantic Relationship Parsing for Visual Question Answering |  | 0 |
| CAPTION: Correction by Analyses, POS-Tagging and Interpretation of Objects using only Nouns |  | 0 |
| ISAAQ -- Mastering Textbook Questions with Pre-trained Transformers and Bottom-Up and Top-Down Attention |  | 0 |
| Graph-based Heuristic Search for Module Selection Procedure in Neural Module Network |  | 0 |
| Spatial Attention as an Interface for Image Captioning Models |  | 0 |
| Hierarchical Deep Multi-modal Network for Medical Visual Question Answering | Code | 0 |
| Multiple interaction learning with question-type prior knowledge for constraining answer search space in visual question answering | Code | 0 |
| Regularizing Attention Networks for Anomaly Detection in Visual Question Answering |  | 0 |
| A Multimodal Memes Classification: A Survey and Open Research Issues |  | 0 |
Page 34 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | human | Accuracy | 89.3 |  | Unverified |
| 2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 |  | Unverified |
| 3 | TRRNet (Ensemble) | Accuracy | 74.03 |  | Unverified |
| 4 | MIL-nbgao | Accuracy | 73.81 |  | Unverified |
| 5 | Kakao Brain | Accuracy | 73.33 |  | Unverified |
| 6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 |  | Unverified |
| 7 | 270 | Accuracy | 70.23 |  | Unverified |
| 8 | NSM ensemble (updated) | Accuracy | 67.55 |  | Unverified |
| 9 | VinVL-DPT | Accuracy | 64.92 |  | Unverified |
| 10 | VinVL+L | Accuracy | 64.85 |  | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PaLI | Accuracy | 84.3 |  | Unverified |
| 2 | BEiT-3 | Accuracy | 84.19 |  | Unverified |
| 3 | VLMo | Accuracy | 82.78 |  | Unverified |
| 4 | ONE-PEACE | Accuracy | 82.6 |  | Unverified |
| 5 | mPLUG (Huge) | Accuracy | 82.43 |  | Unverified |
| 6 | CuMo-7B | Accuracy | 82.2 |  | Unverified |
| 7 | X2-VLM (large) | Accuracy | 81.9 |  | Unverified |
| 8 | MMU | Accuracy | 81.26 |  | Unverified |
| 9 | Lyrics | Accuracy | 81.2 |  | Unverified |
| 10 | InternVL-C | Accuracy | 81.2 |  | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BEiT-3 | overall | 84.03 |  | Unverified |
| 2 | mPLUG-Huge | overall | 83.62 |  | Unverified |
| 3 | ONE-PEACE | overall | 82.52 |  | Unverified |
| 4 | X2-VLM (large) | overall | 81.8 |  | Unverified |
| 5 | VLMo | overall | 81.3 |  | Unverified |
| 6 | SimVLM | overall | 80.34 |  | Unverified |
| 7 | X2-VLM (base) | overall | 80.2 |  | Unverified |
| 8 | VAST | overall | 80.19 |  | Unverified |
| 9 | VALOR | overall | 78.62 |  | Unverified |
| 10 | Prompt Tuning | overall | 78.53 |  | Unverified |