Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer vision task in which a system answers questions about an image. The goal of VQA is to teach machines to understand the content of an image well enough to answer questions about it in natural language.
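As a concrete illustration of the task, the sketch below runs an off-the-shelf VQA model through the Hugging Face transformers pipeline. This is a minimal example, not the method of any paper listed on this page: the ViLT checkpoint dandelin/vilt-b32-finetuned-vqa is one publicly available model fine-tuned on VQA v2, and the image path and question are placeholders.

```python
# Minimal VQA inference sketch using the Hugging Face `transformers`
# visual-question-answering pipeline. The checkpoint below is one
# public ViLT model fine-tuned on VQA v2; the image path and the
# question are placeholders.
from transformers import pipeline

vqa = pipeline(
    task="visual-question-answering",
    model="dandelin/vilt-b32-finetuned-vqa",
)

# The pipeline accepts a local file path, a URL, or a PIL image.
predictions = vqa(image="street_scene.jpg", question="How many cars are parked?")

# Each prediction is a dict with an answer string and a confidence score.
for pred in predictions:
    print(f"{pred['answer']}: {pred['score']:.3f}")
```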

Image Source: visualqa.org

Papers

Showing 1001–1050 of 2167 papers

Title | Status | Hype
HRVQA: A Visual Question Answering Benchmark for High-Resolution Aerial Images | - | 0
Champion Solution for the WSDM2023 Toloka VQA Challenge | Code | 3
Towards Models that Can See and Read | - | 0
Curriculum Script Distillation for Multilingual Visual Question Answering | - | 0
Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks | Code | 0
SlideVQA: A Dataset for Document Visual Question Answering on Multiple Images | Code | 1
Multimodal Inverse Cloze Task for Knowledge-based Visual Question Answering | Code | 1
Adaptively Clustering Neighbor Elements for Image-Text Generation | Code | 0
PromptCap: Prompt-Guided Image Captioning for VQA with GPT-3 | - | 0
Variational Causal Inference Network for Explanatory Visual Question Answering | Code | 1
Toward Multi-Granularity Decision-Making: Explicit Visual Reasoning with Hierarchical Knowledge | Code | 0
Decouple Before Interact: Multi-Modal Prompt Learning for Continual Visual Question Answering | - | 0
RMLVQA: A Margin Loss Approach for Visual Question Answering With Language Biases | - | 0
From Images to Textual Prompts: Zero-Shot Visual Question Answering With Frozen Large Language Models | - | 0
VQACL: A Novel Visual Question Answering Continual Learning Setting | Code | 1
Dynamic Inference With Grounding Based Vision and Language Models | - | 0
HiTeA: Hierarchical Temporal-Aware Video-Language Pre-training | - | 0
VQA and Visual Reasoning: An Overview of Recent Datasets, Methods and Challenges | - | 0
When are Lemons Purple? The Concept Association Bias of Vision-Language Models | - | 0
From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models | Code | 0
UnICLAM: Contrastive Representation Learning with Adversarial Masking for Unified and Interpretable Medical Vision Question Answering | - | 0
DePlot: One-shot visual language reasoning by plot-to-table translation | - | 0
Towards Unsupervised Visual Reasoning: Do Off-The-Shelf Features Know How to Reason? | - | 0
MIST: Multi-modal Iterative Spatial-Temporal Transformer for Long-form Video Question Answering | Code | 1
MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering | - | 0
SceneGATE: Scene-Graph based co-Attention networks for TExt visual question answering | - | 0
CLIPPO: Image-and-Language Understanding from Pixels Only | - | 0
REVEAL: Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge Memory | Code | 0
VideoCoCa: Video-Text Modeling with Zero-Shot Transfer from Contrastive Captioners | - | 0
Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations | Code | 1
ParsVQA-Caps: A Benchmark for Visual Question Answering and Image Captioning in Persian | - | 0
Hierarchical multimodal transformers for Multi-Page DocVQA | Code | 1
Review of Ansatz Designing Techniques for Variational Quantum Algorithms | - | 0
InternVideo: General Video Foundation Models via Generative and Discriminative Learning | Code | 4
Unifying Vision, Text, and Layout for Universal Document Processing | Code | 3
Visual Question Answering From Another Perspective: CLEVR Mental Rotation Tests | Code | 0
Compound Tokens: Channel Fusion for Vision-Language Representation Learning | - | 0
Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning | Code | 1
Semi-supervised Learning of Perceptual Video Quality by Generating Consistent Pairwise Pseudo-Ranks | - | 0
Optimizing Explanations by Network Canonization and Hyperparameter Search | - | 0
PiggyBack: Pretrained Visual Question Answering Environment for Backing up Non-deep Learning Professionals | - | 0
Neuro-Symbolic Spatio-Temporal Reasoning | - | 0
Seeing What You Miss: Vision-Language Pre-training with Semantic Completion Learning | Code | 1
Self-supervised vision-language pretraining for Medical visual question answering | Code | 1
Look, Read and Ask: Learning to Ask Questions by Reading Text in Images | - | 0
A Short Survey of Systematic Generalization | - | 0
X^2-VLM: All-In-One Pre-trained Model For Vision-Language Tasks | Code | 2
Cross-Modal Contrastive Learning for Robust Reasoning in VQA | Code | 0
Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations | Code | 1
Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference | - | 0
Page 21 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | - | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | - | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | - | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | - | Unverified
5 | Kakao Brain | Accuracy | 73.33 | - | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | - | Unverified
7 | 270 | Accuracy | 70.23 | - | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | - | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | - | Unverified
10 | VinVL+L | Accuracy | 64.85 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | - | Unverified
2 | BEiT-3 | Accuracy | 84.19 | - | Unverified
3 | VLMo | Accuracy | 82.78 | - | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | - | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | - | Unverified
6 | CuMo-7B | Accuracy | 82.2 | - | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | - | Unverified
8 | MMU | Accuracy | 81.26 | - | Unverified
9 | Lyrics | Accuracy | 81.2 | - | Unverified
10 | InternVL-C | Accuracy | 81.2 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | - | Unverified
2 | mPLUG-Huge | overall | 83.62 | - | Unverified
3 | ONE-PEACE | overall | 82.52 | - | Unverified
4 | X2-VLM (large) | overall | 81.8 | - | Unverified
5 | VLMo | overall | 81.3 | - | Unverified
6 | SimVLM | overall | 80.34 | - | Unverified
7 | X2-VLM (base) | overall | 80.2 | - | Unverified
8 | VAST | overall | 80.19 | - | Unverified
9 | VALOR | overall | 78.62 | - | Unverified
10 | Prompt Tuning | overall | 78.53 | - | Unverified
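A note on the metric: the leaderboards above report each paper's claimed Accuracy (or VQA-style "overall" score). Where a benchmark follows the standard VQA evaluation, a predicted answer is scored against ten human-annotated answers rather than a single ground truth. The sketch below shows that consensus formula in simplified form; it is an assumption that these particular tables use it, and the official evaluation additionally normalizes answers and averages over leave-one-annotator-out subsets.

```python
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Simplified VQA consensus accuracy: a prediction is fully
    correct if at least 3 of the (typically 10) human annotators
    gave the same answer. The official evaluation also normalizes
    answers (case, punctuation, number words) and averages over
    leave-one-annotator-out subsets; both are omitted here.
    """
    matches = sum(answer == predicted for answer in human_answers)
    return min(matches / 3.0, 1.0)

# Example: 4 of 10 annotators answered "2", so the prediction scores 1.0.
print(vqa_accuracy("2", ["2", "2", "two", "2", "3", "2", "3", "two", "3", "3"]))
```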