SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system is given an image and a natural-language question about that image and must produce an accurate natural-language answer. The goal is to teach machines to understand the content of an image well enough to answer open-ended questions about it.

Image Source: visualqa.org
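
In practice, a VQA system takes an image plus a question string and returns one or more candidate answers, usually with confidence scores. The sketch below runs an off-the-shelf pretrained model through the Hugging Face transformers pipeline; the checkpoint name, image path, and question are illustrative assumptions rather than anything specified on this page.

```python
# Minimal sketch: answering a question about an image with a pretrained VQA model.
# Assumes `transformers`, `torch`, and `Pillow` are installed; the checkpoint
# and inputs below are example choices, not endorsed by this leaderboard.
from transformers import pipeline
from PIL import Image

vqa = pipeline(
    "visual-question-answering",
    model="dandelin/vilt-b32-finetuned-vqa",  # example VQA checkpoint
)

image = Image.open("example.jpg")  # any RGB image on disk
answers = vqa(image=image, question="How many dogs are in the picture?")
print(answers)  # e.g. [{'answer': '2', 'score': 0.87}, ...] (values vary by model and image)
```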

Papers

Showing 1101–1150 of 2167 papers

Title | Status | Hype
Linearly Mapping from Image to Text Space | Code | 1
TVLT: Textless Vision-Language Transformer | Code | 1
RepsNet: Combining Vision with Language for Automated Medical Reports | | 0
Towards Explainable 3D Grounded Visual Question Answering: A New Benchmark and Strong Baseline | Code | 1
Exploring Modulated Detection Transformer as a Tool for Action Recognition in Videos | Code | 0
Continual VQA for Disaster Response Systems | Code | 0
Toward 3D Spatial Reasoning for Human-like Text-based Visual Question Answering | | 0
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering | Code | 2
Panoramic Vision Transformer for Saliency Detection in 360° Videos | Code | 1
Overcoming Language Priors in Visual Question Answering via Distinguishing Superficially Similar Instances | Code | 0
LAVIS: A Library for Language-Vision Intelligence | | 0
OmniVL: One Foundation Model for Image-Language and Video-Language Tasks | | 0
PaLI: A Jointly-Scaled Multilingual Language-Image Model | | 0
Correlation Information Bottleneck: Towards Adapting Pretrained Multimodal Models for Robust Visual Question Answering | | 0
MUST-VQA: MUltilingual Scene-text VQA | | 0
PreSTU: Pre-Training for Scene-Text Understanding | | 0
MaXM: Towards Multilingual Visual Question Answering | Code | 1
Pre-training image-language transformers for open-vocabulary tasks | | 0
Improving the Cross-Lingual Generalisation in Visual Question Answering | Code | 0
An Empirical Study of End-to-End Video-Language Transformers with Masked Visual Modeling | Code | 1
2BiVQA: Double Bi-LSTM based Video Quality Assessment of UGC Videos | Code | 1
Evaluating Point Cloud from Moving Camera Videos: A No-Reference Metric | Code | 0
Efficient Vision-Language Pretraining with Visual Concepts and Hierarchical Alignment | Code | 1
Symbolic Replay: Scene Graph as Prompt for Continual Learning on VQA Task | Code | 1
Bidirectional Contrastive Split Learning for Visual Question Answering | | 0
FashionVQA: A Domain-Specific Visual Question Answering System | | 0
How good are deep models in understanding the generated images? | | 0
Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks | Code | 0
VLMAE: Vision-Language Masked Autoencoder | | 0
Understanding Attention for Vision-and-Language Tasks | Code | 0
ILLUME: Rationalizing Vision-Language Models through Human Interactions | Code | 0
Aesthetic Visual Question Answering of Photographs | | 0
CLEVR-Math: A Dataset for Compositional Language, Visual and Mathematical Reasoning | Code | 1
ChiQA: A Large Scale Image-based Real-World Question Answering Dataset for Multi-Modal Understanding | Code | 1
Prompt Tuning for Generative Multimodal Pretrained Models | | 0
TAG: Boosting Text-VQA via Text-aware Visual Question-answer Generation | Code | 1
NAPA: Intermediate-level Variational Native-pulse Ansatz for Variational Quantum Algorithms | | 0
Generative Bias for Robust Visual Question Answering | Code | 1
Video Question Answering with Iterative Video-Text Co-Tokenization | | 0
Parameter-Parallel Distributed Variational Quantum Algorithm | | 0
Uncertainty-based Visual Question Answering: Estimating Semantic Inconsistency between Image and Knowledge Base | | 0
LaKo: Knowledge-driven Visual Question Answering via Late Knowledge-to-Text Injection | Code | 1
Cross-Modal Causal Relational Reasoning for Event-Level Visual Question Answering | Code | 1
WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models | Code | 0
Is GPT-3 all you need for Visual Question Answering in Cultural Heritage? | | 0
Towards Complex Document Understanding By Discrete Reasoning | | 0
Visual Perturbation-aware Collaborative Learning for Overcoming the Language Prior Problem | | 0
Semantic-aware Modular Capsule Routing for Visual Question Answering | | 0
Rethinking Data Augmentation for Robust Visual Question Answering | Code | 1
Clover: Towards A Unified Video-Language Alignment and Fusion Model | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | | Unverified
5 | Kakao Brain | Accuracy | 73.33 | | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | | Unverified
7 | 270 | Accuracy | 70.23 | | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | | Unverified
10 | VinVL+L | Accuracy | 64.85 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | | Unverified
2 | BEiT-3 | Accuracy | 84.19 | | Unverified
3 | VLMo | Accuracy | 82.78 | | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | | Unverified
6 | CuMo-7B | Accuracy | 82.2 | | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | | Unverified
8 | MMU | Accuracy | 81.26 | | Unverified
9 | Lyrics | Accuracy | 81.2 | | Unverified
10 | InternVL-C | Accuracy | 81.2 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | | Unverified
2 | mPLUG-Huge | overall | 83.62 | | Unverified
3 | ONE-PEACE | overall | 82.52 | | Unverified
4 | X2-VLM (large) | overall | 81.8 | | Unverified
5 | VLMo | overall | 81.3 | | Unverified
6 | SimVLM | overall | 80.34 | | Unverified
7 | X2-VLM (base) | overall | 80.2 | | Unverified
8 | VAST | overall | 80.19 | | Unverified
9 | VALOR | overall | 78.62 | | Unverified
10 | Prompt Tuning | overall | 78.53 | | Unverified
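
The accuracy values above are as claimed by the original papers and remain Unverified until independently reproduced. For reference, VQA-v2-style benchmarks usually score each answer with a soft accuracy, min(#matching human answers / 3, 1), averaged over questions; whether every leaderboard on this page follows exactly that convention is an assumption. A minimal sketch of the computation (illustrative helper names; the official evaluator also normalizes answers and averages over annotator subsets):

```python
# Minimal sketch of the soft accuracy used by VQA-v2-style benchmarks.
def vqa_soft_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Full credit if at least 3 of the (typically 10) annotators gave this answer."""
    matches = sum(1 for answer in human_answers if answer == predicted)
    return min(matches / 3.0, 1.0)

def overall_accuracy(predictions: list[str], annotations: list[list[str]]) -> float:
    """Dataset-level score, reported as a percentage like the tables above."""
    scores = [vqa_soft_accuracy(p, humans) for p, humans in zip(predictions, annotations)]
    return 100.0 * sum(scores) / len(scores)

# Two of ten annotators said "blue": the question scores 2/3 rather than 0 or 1.
print(vqa_soft_accuracy("blue", ["blue", "blue"] + ["navy"] * 8))  # ~0.667
```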