SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer vision task in which a system is given an image and a natural-language question about it and must produce a natural-language answer. The goal is for machines to understand the content of an image well enough to answer open-ended questions about it.

Image Source: visualqa.org
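
For a concrete sense of the task, a pretrained VQA model can be queried in a few lines. The sketch below uses the Hugging Face transformers pipeline with a publicly available ViLT checkpoint as one example; the image path is a placeholder:

    from transformers import pipeline

    # Load a pretrained VQA model; dandelin/vilt-b32-finetuned-vqa is a
    # ViLT checkpoint fine-tuned on the VQA v2 dataset.
    vqa = pipeline("visual-question-answering",
                   model="dandelin/vilt-b32-finetuned-vqa")

    # "photo.jpg" is a placeholder path; the pipeline also accepts URLs
    # and PIL images. It returns candidate answers ranked by confidence.
    result = vqa(image="photo.jpg",
                 question="How many people are in the picture?")
    print(result[0]["answer"], result[0]["score"])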

Papers

Showing 801–825 of 2167 papers

Title | Status | Hype
How Well Can Vision-Language Models Understand Humans' Intention? An Open-ended Theory of Mind Question Evaluation Benchmark | - | 0
CP-LLM: Context and Pixel Aware Large Language Model for Video Quality Assessment | - | 0
Language Features Matter: Effective Language Representations for Vision-Language Tasks | - | 0
CQ-VQA: Visual Question Answering on Categorized Questions | - | 0
Compressing Visual-linguistic Model via Knowledge Distillation | - | 0
Grounded Word Sense Translation | - | 0
Knowledge Condensation and Reasoning for Knowledge-based VQA | - | 0
Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions? | - | 0
Hummingbird: High Fidelity Image Generation via Multimodal Context Alignment | - | 0
HVS Revisited: A Comprehensive Video Quality Assessment Framework | - | 0
Griffon-G: Bridging Vision-Language and Vision-Centric Tasks via Large Multimodal Models | - | 0
A Thousand Words Are Worth More Than a Picture: Natural Language-Centric Outside-Knowledge Visual Question Answering | - | 0
Knowledge Detection by Relevant Question and Image Attributes in Visual Question Answering | - | 0
Graph-Structured Representations for Visual Question Answering | - | 0
Compound Tokens: Channel Fusion for Vision-Language Representation Learning | - | 0
Graph Relation Transformer: Incorporating pairwise object features into the Transformer architecture | - | 0
Bilinear Graph Networks for Visual Question Answering | - | 0
ICDAR 2021 Competition on Document Visual Question Answering | - | 0
Aligning MAGMA by Few-Shot Learning and Finetuning | - | 0
Graph Neural Networks in Vision-Language Image Understanding: A Survey | - | 0
Compositional Memory for Visual Question Answering | - | 0
Learning Compositional Representation for Few-shot Visual Question Answering | - | 0
Graph Edit Distance Reward: Learning to Edit Scene Graph | - | 0
A survey on VQA: Datasets and Approaches | - | 0
A survey on knowledge-enhanced multimodal learning | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | - | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | - | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | - | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | - | Unverified
5 | Kakao Brain | Accuracy | 73.33 | - | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | - | Unverified
7 | 270 | Accuracy | 70.23 | - | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | - | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | - | Unverified
10 | VinVL+L | Accuracy | 64.85 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | - | Unverified
2 | BEiT-3 | Accuracy | 84.19 | - | Unverified
3 | VLMo | Accuracy | 82.78 | - | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | - | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | - | Unverified
6 | CuMo-7B | Accuracy | 82.2 | - | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | - | Unverified
8 | MMU | Accuracy | 81.26 | - | Unverified
9 | Lyrics | Accuracy | 81.2 | - | Unverified
10 | InternVL-C | Accuracy | 81.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | - | Unverified
2 | mPLUG-Huge | overall | 83.62 | - | Unverified
3 | ONE-PEACE | overall | 82.52 | - | Unverified
4 | X2-VLM (large) | overall | 81.8 | - | Unverified
5 | VLMo | overall | 81.3 | - | Unverified
6 | SimVLM | overall | 80.34 | - | Unverified
7 | X2-VLM (base) | overall | 80.2 | - | Unverified
8 | VAST | overall | 80.19 | - | Unverified
9 | VALOR | overall | 78.62 | - | Unverified
10 | Prompt Tuning | overall | 78.53 | - | Unverified
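
For context on the Accuracy metric: on the original VQA datasets, each question comes with ten human answers, and a prediction is scored by agreement with the annotators, commonly summarized as min(#matching answers / 3, 1). Below is a minimal sketch of that scoring rule; it is simplified, since the official evaluator also normalizes case, punctuation, articles, and number words before matching, and not every leaderboard shown here necessarily uses this protocol:

    def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
        """Simplified VQA accuracy: a prediction gets full credit when at
        least 3 of the (typically 10) human annotators gave that answer."""
        pred = predicted.strip().lower()
        matches = sum(1 for a in human_answers if a.strip().lower() == pred)
        return min(matches / 3.0, 1.0)

    # Example: 4 of 10 annotators answered "2", so the prediction scores 1.0;
    # had only 2 annotators agreed, it would score 2/3.
    print(vqa_accuracy("2", ["2", "2", "two", "2", "3", "2", "3", "two", "3", "3"]))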