SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a vision-and-language task in which a system is given an image and a natural-language question about that image and must produce a correct answer in natural language. The goal is for machines to understand the visual content of an image well enough to answer open-ended questions about it.

Image Source: visualqa.org
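
To make the task concrete, here is a minimal inference sketch using the Hugging Face transformers library and the publicly available dandelin/vilt-b32-finetuned-vqa checkpoint. The checkpoint, example image URL, and question are illustrative assumptions and are not part of this page's listings or results.

    # Minimal VQA inference sketch (assumes transformers, torch, pillow, requests are installed).
    import requests
    from PIL import Image
    from transformers import ViltProcessor, ViltForQuestionAnswering

    processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
    model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

    # Placeholder image and question; swap in your own.
    image = Image.open(requests.get(
        "http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
    question = "How many cats are there?"

    # Encode the (image, question) pair and pick the highest-scoring answer class.
    inputs = processor(image, question, return_tensors="pt")
    logits = model(**inputs).logits
    answer = model.config.id2label[logits.argmax(-1).item()]
    print(answer)

Most VQA models follow this pattern: encode the image and question jointly, then either classify over a fixed answer vocabulary (as above) or generate the answer token by token.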

Papers

Showing 501–550 of 2167 papers

Title | Status | Hype
Multimodal Co-Attention Transformer for Survival Prediction in Gigapixel Whole Slide Images | Code | 1
TRAR: Routing the Attention Spans in Transformer for Visual Question Answering | Code | 1
Detecting Hate Speech in Multi-modal Memes | Code | 1
Overcoming Language Priors with Self-supervised Learning for Visual Question Answering | Code | 1
Knowledge-Routed Visual Question Reasoning: Challenges for Deep Representation Embedding | Code | 1
CRAFT: A Benchmark for Causal Reasoning About Forces and inTeractions | Code | 1
TAP: Text-Aware Pre-training for Text-VQA and Text-Caption | Code | 1
FloodNet: A High Resolution Aerial Imagery Dataset for Post Flood Scene Understanding | Code | 1
Just Ask: Learning to Answer Questions from Millions of Narrated Videos | Code | 1
Point and Ask: Incorporating Pointing into Visual Question Answering | Code | 1
Patch-VQ: 'Patching Up' the Video Quality Problem | Code | 1
Transformation Driven Visual Reasoning | Code | 1
Large Scale Multimodal Classification Using an Ensemble of Transformer Models and Co-Attention | Code | 1
LRTA: A Transparent Neural-Symbolic Reasoning Framework with Modular Supervision for Visual Question Answering | Code | 1
Disentangling 3D Prototypical Networks For Few-Shot Concept Learning | Code | 1
ConceptBert: Concept-Aware Representation for Visual Question Answering | Code | 1
Learning to Contrast the Counterfactual Samples for Robust Visual Question Answering | Code | 1
MMFT-BERT: Multimodal Fusion Transformer with BERT Encodings for Visual Question Answering | Code | 1
ST-GREED: Space-Time Generalized Entropic Differences for Frame Rate Dependent Video Quality Prediction | Code | 1
RUArt: A Novel Text-Centered Solution for Text-Based Visual Question Answering | Code | 1
Removing Bias in Multi-modal Classifiers: Regularization by Maximizing Functional Entropies | Code | 1
Bayesian Attention Modules | Code | 1
Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs | Code | 1
Contrast and Classify: Training Robust VQA Models | Code | 1
X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers | Code | 1
MUTANT: A Training Paradigm for Out-of-Distribution Generalization in Visual Question Answering | Code | 1
A Comparison of Pre-trained Vision-and-Language Models for Multimodal Representation Learning across Medical Images and Reports | Code | 1
A Dataset and Baselines for Visual Question Answering on Art | Code | 1
DeVLBert: Learning Deconfounded Visio-Linguistic Representations | Code | 1
Spatially Aware Multimodal Transformers for TextVQA | Code | 1
Semantic Equivalent Adversarial Data Augmentation for Visual Question Answering | Code | 1
Knowledge-Based Video Question Answering with Unsupervised Scene Descriptions | Code | 1
Learning to Discretely Compose Reasoning Module Networks for Video Captioning | Code | 1
DocVQA: A Dataset for VQA on Document Images | Code | 1
Visual Question Generation from Radiology Images | Code | 1
Ontology-guided Semantic Composition for Zero-Shot Learning | Code | 1
Graph Optimal Transport for Cross-Domain Alignment | Code | 1
Sparse and Continuous Attention Mechanisms | Code | 1
Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning | Code | 1
Large-Scale Adversarial Training for Vision-and-Language Representation Learning | Code | 1
Roses Are Red, Violets Are Blue... but Should Vqa Expect Them To? | Code | 1
Counterfactual VQA: A Cause-Effect Look at Language Bias | Code | 1
Attention-Based Context Aware Reasoning for Situation Recognition | Code | 1
Structured Multimodal Attentions for TextVQA | Code | 1
UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated Content | Code | 1
Cross-Modality Relevance for Reasoning on Language and Vision | Code | 1
COBRA: Contrastive Bi-Modal Representation Algorithm | Code | 1
Dynamic Language Binding in Relational Visual Reasoning | Code | 1
Deep Multimodal Neural Architecture Search | Code | 1
Visual Grounding Methods for VQA are Working for the Wrong Reasons! | Code | 1
Page 11 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | – | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | – | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | – | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | – | Unverified
5 | Kakao Brain | Accuracy | 73.33 | – | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | – | Unverified
7 | 270 | Accuracy | 70.23 | – | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | – | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | – | Unverified
10 | VinVL+L | Accuracy | 64.85 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | – | Unverified
2 | BEiT-3 | Accuracy | 84.19 | – | Unverified
3 | VLMo | Accuracy | 82.78 | – | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | – | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | – | Unverified
6 | CuMo-7B | Accuracy | 82.2 | – | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | – | Unverified
8 | MMU | Accuracy | 81.26 | – | Unverified
9 | Lyrics | Accuracy | 81.2 | – | Unverified
10 | InternVL-C | Accuracy | 81.2 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | – | Unverified
2 | mPLUG-Huge | overall | 83.62 | – | Unverified
3 | ONE-PEACE | overall | 82.52 | – | Unverified
4 | X2-VLM (large) | overall | 81.8 | – | Unverified
5 | VLMo | overall | 81.3 | – | Unverified
6 | SimVLM | overall | 80.34 | – | Unverified
7 | X2-VLM (base) | overall | 80.2 | – | Unverified
8 | VAST | overall | 80.19 | – | Unverified
9 | VALOR | overall | 78.62 | – | Unverified
10 | Prompt Tuning | overall | 78.53 | – | Unverified
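
A note on the Accuracy and overall columns: this page does not define the metric, but VQA-style leaderboards commonly report the consensus accuracy from the VQA benchmark, where each question has ten human answers and a prediction is credited min(#annotators who gave that answer / 3, 1). The sketch below illustrates that formula under this assumption; the function name and example answers are illustrative only.

    # Hedged sketch of the standard VQA consensus accuracy (an assumption about the
    # "Accuracy"/"overall" numbers above; the page itself does not state the formula).
    def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
        # Full credit if at least 3 of the annotators gave the predicted answer.
        matches = sum(1 for a in human_answers
                      if a.strip().lower() == predicted.strip().lower())
        return min(matches / 3.0, 1.0)

    # Example: 2 of 10 annotators said "blue", so the prediction earns 2/3 credit.
    answers = ["blue", "blue", "navy", "dark blue"] + ["black"] * 6
    print(vqa_accuracy("blue", answers))  # 0.666...

Per-question scores computed this way are then averaged over the test set to give the leaderboard number.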