SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system answers natural-language questions about an image. The goal is for machines to understand the content of an image well enough to answer free-form questions about it in natural language.

[Example image omitted; source: visualqa.org]
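Concretely, a VQA system takes an image plus a question as input and returns an answer. Below is a minimal sketch using the Hugging Face transformers visual-question-answering pipeline; the checkpoint name and file path are illustrative choices, not something this page prescribes.

```python
# Minimal VQA sketch using the Hugging Face transformers pipeline.
# The checkpoint (dandelin/vilt-b32-finetuned-vqa) and the image path
# are assumptions for illustration; any VQA-finetuned model works
# the same way through this pipeline.
from transformers import pipeline

vqa = pipeline(
    "visual-question-answering",
    model="dandelin/vilt-b32-finetuned-vqa",
)

# Input: an image and a free-form natural-language question about it.
result = vqa(
    image="path/to/image.jpg",
    question="How many people are in the picture?",
)

# Output: candidate answers ranked by score,
# e.g. [{'score': 0.87, 'answer': '2'}, ...]
print(result[0]["answer"])
```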

Papers

Showing 1501–1550 of 2167 papers

Title | Status | Hype
UFO: A UniFied TransfOrmer for Vision-Language Representation Learning | – | 0
Medical Visual Question Answering: A Survey | – | 0
Blind VQA on 360° Video via Progressively Learning from Pixels, Frames and Video | Code | 0
Achieving Human Parity on Visual Question Answering | – | 0
Co-VQA : Answering by Interactive Sub Question Sequence | – | 0
Language bias in Visual Question Answering: A Survey and Taxonomy | – | 0
Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation | – | 0
Uncertainty-based Visual Question Answering: Estimating Semantic Inconsistency between Image and Knowledge Base | – | 0
ViQuAE, a Dataset for Knowledge-based Visual Question Answering about Named Entities | Code | 0
Question-Led Semantic Structure Enhanced Attentions for VQA | – | 0
Breaking Down Questions for Outside-Knowledge Visual Question Answering | – | 0
A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models | – | 0
Document AI: Benchmarks, Models and Applications | – | 0
No-Reference Video Quality Assessment Based on Benford’s Law and Perceptual Features | Code | 0
Graph Relation Transformer: Incorporating pairwise object features into the Transformer architecture | – | 0
ICDAR 2021 Competition on Document Visual Question Answering | – | 0
Visual Question Answering based on Formal Logic | – | 0
CrossVQA: Scalably Generating Benchmarks for Systematically Testing VQA Generalization | – | 0
Diversity and Consistency: Exploring Visual Question-Answer Pair Generation | – | 0
MIRTT: Learning Multimodal Interaction Representations from Trilinear Transformers for Visual Question Answering | Code | 0
Perceptual Score: What Data Modalities Does Your Model Perceive? | Code | 0
Subtleties in the trainability of quantum machine learning models | – | 0
Alignment Attention by Matching Key and Query Distributions | Code | 0
Robustness through Data Augmentation Loss Consistency | Code | 0
Single-Modal Entropy based Active Learning for Visual Question Answering | – | 0
Evaluating and Improving Interactions with Hazy Oracles | – | 0
Towards Language-guided Visual Recognition via Dynamic Convolutions | Code | 0
Explore before Moving: A Feasible Path Estimation and Memory Recalling Framework for Embodied Navigation | – | 0
xGQA: Cross-Lingual Visual Question Answering | – | 0
Guiding Visual Question Generation | – | 0
Semantically Distributed Robust Optimization for Vision-and-Language Inference | Code | 0
Improving Users' Mental Model with Attention-directed Counterfactual Edits | – | 0
MMIU: Dataset for Visual Intent Understanding in Multimodal Assistants | – | 0
Beyond Accuracy: A Consolidated Tool for Visual Question Answering Benchmarking | Code | 0
Asking questions on handwritten document collections | – | 0
Breaking Down Questions for Outside-Knowledge VQA | – | 0
PRNet: A Progressive Regression Network for No-Reference User-Generated-Content Video Quality Assessment | – | 0
Crossformer: Transformer with Alternated Cross-Layer Guidance | – | 0
How Much Can CLIP Benefit Vision-and-Language Tasks? | – | 0
Variational Disentangled Attention for Regularized Visual Dialog | – | 0
Measuring CLEVRness: Black-box Testing of Visual Reasoning Models | – | 0
VQA-MHUG: A Gaze Dataset to Study Multimodal Neural Attention in Visual Question Answering | – | 0
High Frame Rate Video Quality Assessment using VMAF and Entropic Differences | – | 0
Multimodal Integration of Human-Like Attention in Visual Question Answering | – | 0
How to find a good image-text embedding for remote sensing visual question answering? | – | 0
Image Captioning for Effective Use of Language Models in Knowledge-Based Visual Question Answering | Code | 0
Discovering the Unknown Knowns: Turning Implicit Knowledge in the Dataset into Explicit Training Examples for Visual Question Answering | Code | 0
Towards Developing a Multilingual and Code-Mixed Visual Question Answering System by Knowledge Distillation | – | 0
TxT: Crossmodal End-to-End Learning with Transformers | – | 0
Improved RAMEN: Towards Domain Generalization for Visual Question Answering | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | – | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | – | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | – | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | – | Unverified
5 | Kakao Brain | Accuracy | 73.33 | – | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | – | Unverified
7 | 270 | Accuracy | 70.23 | – | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | – | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | – | Unverified
10 | VinVL+L | Accuracy | 64.85 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | – | Unverified
2 | BEiT-3 | Accuracy | 84.19 | – | Unverified
3 | VLMo | Accuracy | 82.78 | – | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | – | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | – | Unverified
6 | CuMo-7B | Accuracy | 82.2 | – | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | – | Unverified
8 | MMU | Accuracy | 81.26 | – | Unverified
9 | Lyrics | Accuracy | 81.2 | – | Unverified
10 | InternVL-C | Accuracy | 81.2 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | – | Unverified
2 | mPLUG-Huge | overall | 83.62 | – | Unverified
3 | ONE-PEACE | overall | 82.52 | – | Unverified
4 | X2-VLM (large) | overall | 81.8 | – | Unverified
5 | VLMo | overall | 81.3 | – | Unverified
6 | SimVLM | overall | 80.34 | – | Unverified
7 | X2-VLM (base) | overall | 80.2 | – | Unverified
8 | VAST | overall | 80.19 | – | Unverified
9 | VALOR | overall | 78.62 | – | Unverified
10 | Prompt Tuning | overall | 78.53 | – | Unverified
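A note on the metric: on the standard VQA benchmarks, "Accuracy" (and the "overall" score) usually means the consensus accuracy introduced with the original VQA dataset (Antol et al., 2015), where an answer earns full credit if at least three of the ten human annotators gave it. The tables above do not state which variant each leaderboard uses, so the sketch below assumes the commonly used simplified form of that metric.

```python
# Consensus accuracy as commonly reported for VQA benchmarks:
# acc(ans) = min(1, n/3), where n is the number of the 10 human
# annotators who gave the same answer. This is an assumption about
# the metric used above; the official metric additionally averages
# over all subsets of 9 annotators.
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Score one prediction against the 10 human-provided answers."""
    matches = sum(1 for a in human_answers if a == predicted)
    return min(1.0, matches / 3.0)

# Example: 2 of 10 annotators answered "blue" -> partial credit 2/3.
print(vqa_accuracy("blue", ["blue", "blue"] + ["green"] * 8))  # 0.666...
```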