SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system answers natural-language questions about an image. The goal is for machines to understand an image's content well enough to produce correct, free-form answers in natural language.

Image Source: visualqa.org
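
As a concrete illustration of the task, below is a minimal inference sketch using Hugging Face Transformers and the publicly released ViLT checkpoint fine-tuned on VQAv2 (dandelin/vilt-b32-finetuned-vqa). The sample image URL and question are illustrative only and are not tied to any paper or result listed on this page.

```python
# Minimal VQA inference sketch with a pretrained ViLT model.
from PIL import Image
import requests
from transformers import ViltProcessor, ViltForQuestionAnswering

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# Load an image and pose a natural-language question about it.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are on the couch?"

# Encode the (image, question) pair and predict an answer. ViLT treats VQA as
# classification over a fixed vocabulary of common answers.
inputs = processor(image, question, return_tensors="pt")
outputs = model(**inputs)
answer_id = outputs.logits.argmax(-1).item()
print(model.config.id2label[answer_id])
```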

Papers

Showing 1201–1250 of 2167 papers

Title | Status | Hype
Making Video Quality Assessment Models Sensitive to Frame Rate Distortions | — | 0
Gender and Racial Bias in Visual Question Answering Datasets | — | 0
A Neuro-Symbolic ASP Pipeline for Visual Question Answering | Code | 0
A Framework to Map VMAF with the Probability of Just Noticeable Difference between Video Encoding Recipes | — | 0
Serving and Optimizing Machine Learning Workflows on Heterogeneous Infrastructures | — | 0
Learning to Answer Visual Questions from Web Videos | Code | 1
Joint learning of object graph and relation graph for visual question answering | — | 0
Deep Quality Assessment of Compressed Videos: A Subjective and Objective Study | — | 0
QLEVR: A Diagnostic Dataset for Quantificational Language and Elementary Visual Reasoning | Code | 0
From Easy to Hard: Learning Language-guided Curriculum for Visual Question Answering on Remote Sensing Data | — | 0
What is Right for Me is Not Yet Right for You: A Dataset for Grounding Relative Directions via Multi-Task Learning | Code | 0
LAWS: Look Around and Warm-Start Natural Gradient Descent for Quantum Neural Networks | Code | 0
Declaration-based Prompt Tuning for Visual Question Answering | Code | 1
All You May Need for VQA are Image Captions | Code | 3
CoCa: Contrastive Captioners are Image-Text Foundation Models | Code | 1
Answer-Me: Multi-Task Open-Vocabulary Visual Question Answering | — | 0
Vision-Language Pretraining: Current Trends and the Future | — | 0
ViLMedic: a framework for research at the intersection of vision and language in medical AI | — | 0
DuReader_vis: A Chinese Dataset for Open-domain Document Visual Question Answering | — | 0
Bridging the Gap between Recognition-level Pre-training and Commonsensical Vision-language Tasks | — | 0
Flamingo: a Visual Language Model for Few-Shot Learning | Code | 4
Reliable Visual Question Answering: Abstain Rather Than Answer Incorrectly | Code | 1
GRIT: General Robust Image Task Benchmark | Code | 1
RelViT: Concept-guided Vision Transformer for Visual Relational Reasoning | Code | 1
Hypergraph Transformer: Weakly-supervised Multi-hop Reasoning for Knowledge-based Visual Question Answering | Code | 1
Multimodal Adaptive Distillation for Leveraging Unimodal Encoders for Vision-Language Tasks | — | 0
Attention in Reasoning: Dataset, Analysis, and Modeling | Code | 1
LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking | Code | 0
Attention Mechanism based Cognition-level Scene Understanding | — | 0
Improving Cross-Modal Understanding in Visual Dialog via Contrastive Learning | — | 0
SwapMix: Diagnosing and Regularizing the Over-Reliance on Visual Context in Visual Question Answering | Code | 1
CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations | Code | 1
Question-Driven Graph Fusion Network For Visual Question Answering | — | 0
Co-VQA: Answering by Interactive Sub Question Sequence | — | 0
Perceptual Quality Assessment of UGC Gaming Videos | — | 0
SimVQA: Exploring Simulated Environments for Visual Question Answering | — | 0
VL-InterpreT: An Interactive Visualization Tool for Interpreting Vision-Language Transformers | Code | 0
End-to-end Document Recognition and Understanding with Dessurt | Code | 1
Visual Mechanisms Inspired Efficient Transformers for Image and Video Quality Assessment | — | 0
Single-Stream Multi-Level Alignment for Vision-Language Pretraining | Code | 0
Learning to Answer Questions in Dynamic Audio-Visual Scenarios | Code | 1
Subjective and Objective Analysis of Streamed Gaming Videos | — | 0
Towards Escaping from Language Bias and OCR Error: Semantics-Centered Text Visual Question Answering | — | 0
Bilaterally Slimmable Transformer for Elastic and Efficient Visual Question Answering | Code | 0
WuDaoMM: A large-scale Multi-Modal Dataset for Pre-training models | — | 0
MuKEA: Multimodal Knowledge Extraction and Accumulation for Knowledge-based Visual Question Answering | Code | 1
Can you even tell left from right? Presenting a new challenge for VQA | — | 0
CARETS: A Consistency And Robustness Evaluative Test Suite for VQA | Code | 0
CLIP Models are Few-shot Learners: Empirical Studies on VQA and Visual Entailment | — | 0
All in One: Exploring Unified Video-Language Pre-training | Code | 2
Page 25 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | — | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | — | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | — | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | — | Unverified
5 | Kakao Brain | Accuracy | 73.33 | — | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | — | Unverified
7 | 270 | Accuracy | 70.23 | — | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | — | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | — | Unverified
10 | VinVL+L | Accuracy | 64.85 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | — | Unverified
2 | BEiT-3 | Accuracy | 84.19 | — | Unverified
3 | VLMo | Accuracy | 82.78 | — | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | — | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | — | Unverified
6 | CuMo-7B | Accuracy | 82.2 | — | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | — | Unverified
8 | MMU | Accuracy | 81.26 | — | Unverified
9 | Lyrics | Accuracy | 81.2 | — | Unverified
10 | InternVL-C | Accuracy | 81.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | — | Unverified
2 | mPLUG-Huge | overall | 83.62 | — | Unverified
3 | ONE-PEACE | overall | 82.52 | — | Unverified
4 | X2-VLM (large) | overall | 81.8 | — | Unverified
5 | VLMo | overall | 81.3 | — | Unverified
6 | SimVLM | overall | 80.34 | — | Unverified
7 | X2-VLM (base) | overall | 80.2 | — | Unverified
8 | VAST | overall | 80.19 | — | Unverified
9 | VALOR | overall | 78.62 | — | Unverified
10 | Prompt Tuning | overall | 78.53 | — | Unverified
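
The "Accuracy" and "overall" figures above appear to follow the standard VQA evaluation protocol, in which each predicted answer is scored against ten human-provided answers. Below is a minimal sketch, assuming the common VQAv2 formulation min(#matching annotators / 3, 1); the official evaluator additionally normalizes answer strings and averages over annotator subsets, both omitted here for brevity.

```python
# Simplified VQA accuracy: a prediction gets full credit if at least 3 of the
# 10 human annotators gave the same answer, and partial credit otherwise.
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    matches = sum(1 for a in human_answers if a == predicted)
    return min(matches / 3.0, 1.0)

# Example: 4 of 10 annotators answered "2", so predicting "2" scores 1.0.
print(vqa_accuracy("2", ["2"] * 4 + ["two"] * 3 + ["3"] * 3))  # 1.0
```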