SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system answers natural-language questions about an image. The goal is to teach machines to understand the content of an image well enough to answer free-form questions about it.

Image Source: visualqa.org
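
As a concrete illustration of the task, here is a minimal sketch that runs an off-the-shelf VQA model on a single image. It assumes the Hugging Face transformers library (with PyTorch and Pillow installed) and the public dandelin/vilt-b32-finetuned-vqa checkpoint; the image path and question are hypothetical.

```python
# Minimal VQA sketch: ask a natural-language question about an image.
# Assumes the Hugging Face `transformers` library and the public
# dandelin/vilt-b32-finetuned-vqa checkpoint; "kitchen.jpg" is a
# hypothetical local image.
from transformers import pipeline

vqa = pipeline("visual-question-answering",
               model="dandelin/vilt-b32-finetuned-vqa")

result = vqa(image="kitchen.jpg",
             question="How many plates are on the table?")

# The pipeline returns candidate answers ranked by confidence.
print(result[0]["answer"], result[0]["score"])
```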

Papers

Showing 1351–1400 of 2167 papers

Title | Status | Hype
ICDAR 2021 Competition on Document Visual Question Answering | — | 0
Visual Question Answering based on Formal Logic | — | 0
An Empirical Study of Training End-to-End Vision-and-Language Transformers | Code | 1
VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts | Code | 1
ViVQA: Vietnamese Visual Question Answering | Code | 1
CrossVQA: Scalably Generating Benchmarks for Systematically Testing VQA Generalization | — | 0
Diversity and Consistency: Exploring Visual Question-Answer Pair Generation | — | 0
MIRTT: Learning Multimodal Interaction Representations from Trilinear Transformers for Visual Question Answering | Code | 0
Introspective Distillation for Robust Question Answering | Code | 1
Subtleties in the trainability of quantum machine learning models | — | 0
Perceptual Score: What Data Modalities Does Your Model Perceive? | Code | 0
IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning | Code | 1
Alignment Attention by Matching Key and Query Distributions | Code | 0
Single-Modal Entropy based Active Learning for Visual Question Answering | — | 0
Robustness through Data Augmentation Loss Consistency | Code | 0
Evaluating and Improving Interactions with Hazy Oracles | — | 0
Label-Descriptive Patterns and Their Application to Characterizing Classification Errors | Code | 1
Towards Language-guided Visual Recognition via Dynamic Convolutions | Code | 0
xGQA: Cross-Lingual Visual Question Answering | — | 0
A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models | Code | 1
Explore before Moving: A Feasible Path Estimation and Memory Recalling Framework for Embodied Navigation | — | 0
Guiding Visual Question Generation | — | 0
Semantically Distributed Robust Optimization for Vision-and-Language Inference | Code | 0
Improving Users' Mental Model with Attention-directed Counterfactual Edits | — | 0
MMIU: Dataset for Visual Intent Understanding in Multimodal Assistants | — | 0
Pano-AVQA: Grounded Audio-Visual Question Answering on 360° Videos | Code | 1
Beyond Accuracy: A Consolidated Tool for Visual Question Answering Benchmarking | Code | 0
Coarse-to-Fine Reasoning for Visual Question Answering | Code | 1
Counterfactual Samples Synthesizing and Training for Robust Visual Question Answering | Code | 1
ProTo: Program-Guided Transformer for Program-Guided Tasks | Code | 1
Asking questions on handwritten document collections | — | 0
The Spoon Is in the Sink: Assisting Visually Impaired People in the Kitchen | Code | 1
Calibrating Concepts and Operations: Towards Symbolic Reasoning on Real Images | Code | 1
Breaking Down Questions for Outside-Knowledge VQA | — | 0
PRNet: A Progressive Regression Network for No-Reference User-Generated-Content Video Quality Assessment | — | 0
Variational Disentangled Attention for Regularized Visual Dialog | — | 0
How Much Can CLIP Benefit Vision-and-Language Tasks? | — | 0
Measuring CLEVRness: Black-box Testing of Visual Reasoning Models | — | 0
Crossformer: Transformer with Alternated Cross-Layer Guidance | — | 0
High Frame Rate Video Quality Assessment using VMAF and Entropic Differences | — | 0
VQA-MHUG: A Gaze Dataset to Study Multimodal Neural Attention in Visual Question Answering | — | 0
Multimodal Integration of Human-Like Attention in Visual Question Answering | — | 0
How to find a good image-text embedding for remote sensing visual question answering? | — | 0
Does Vision-and-Language Pretraining Improve Lexical Grounding? | Code | 1
ChipQA: No-Reference Video Quality Prediction via Space-Time Chips | Code | 1
Image Captioning for Effective Use of Language Models in Knowledge-Based Visual Question Answering | Code | 0
xGQA: Cross-Lingual Visual Question Answering | Code | 1
Discovering the Unknown Knowns: Turning Implicit Knowledge in the Dataset into Explicit Training Examples for Visual Question Answering | Code | 0
Towards Developing a Multilingual and Code-Mixed Visual Question Answering System by Knowledge Distillation | — | 0
An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA | Code | 1
Page 28 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | — | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | — | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | — | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | — | Unverified
5 | Kakao Brain | Accuracy | 73.33 | — | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | — | Unverified
7 | 270 | Accuracy | 70.23 | — | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | — | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | — | Unverified
10 | VinVL+L | Accuracy | 64.85 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | — | Unverified
2 | BEiT-3 | Accuracy | 84.19 | — | Unverified
3 | VLMo | Accuracy | 82.78 | — | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | — | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | — | Unverified
6 | CuMo-7B | Accuracy | 82.2 | — | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | — | Unverified
8 | MMU | Accuracy | 81.26 | — | Unverified
9 | Lyrics | Accuracy | 81.2 | — | Unverified
10 | InternVL-C | Accuracy | 81.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | — | Unverified
2 | mPLUG-Huge | overall | 83.62 | — | Unverified
3 | ONE-PEACE | overall | 82.52 | — | Unverified
4 | X2-VLM (large) | overall | 81.8 | — | Unverified
5 | VLMo | overall | 81.3 | — | Unverified
6 | SimVLM | overall | 80.34 | — | Unverified
7 | X2-VLM (base) | overall | 80.2 | — | Unverified
8 | VAST | overall | 80.19 | — | Unverified
9 | VALOR | overall | 78.62 | — | Unverified
10 | Prompt Tuning | overall | 78.53 | — | Unverified
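
For context on the "Accuracy" figures above: the VQA benchmark's consensus metric (Antol et al., 2015) credits a predicted answer in proportion to how many of the ten human annotators gave it, with full credit once three agree. Whether every leaderboard on this page uses exactly this formula is an assumption; the sketch below shows the commonly cited form.

```python
# Sketch of the standard VQA consensus accuracy: an answer is fully
# correct if at least 3 of the 10 human annotators gave it, and earns
# partial credit otherwise. That these leaderboards use exactly this
# metric is an assumption, not something the page confirms.
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    matches = sum(ans == predicted for ans in human_answers)
    return min(matches / 3.0, 1.0)

# Hypothetical example: 2 of 10 annotators answered "blue".
print(vqa_accuracy("blue", ["blue", "blue"] + ["navy"] * 8))  # ~0.667
```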