SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer vision task in which a system answers natural-language questions about an image. The goal is to build models that understand an image's content well enough to answer arbitrary questions about it in natural language.
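As a concrete example, a pretrained vision-language model can be queried in a few lines. The sketch below uses the publicly available ViLT VQA checkpoint from Hugging Face Transformers; the checkpoint name and image URL are illustrative choices, not part of this page.

```python
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# Illustrative checkpoint: ViLT fine-tuned on VQAv2 answers.
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# Any RGB image works; this COCO image URL is just an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

# Encode the image-question pair and take the highest-scoring answer
# from the model's fixed answer vocabulary.
inputs = processor(image, question, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # e.g. "2"
```

Note that classification-style models like this one choose from a fixed answer vocabulary, whereas more recent generative models (e.g. PaLI in the benchmark results below) decode free-form answers.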

[Image: example VQA image-question pairs. Source: visualqa.org]

Papers

Showing 1776–1800 of 2167 papers

Title | Status | Hype
Accuracy vs. Complexity: A Trade-off in Visual Question Answering Models | — | 0
Recommending Themes for Ad Creative Design via Visual-Linguistic Representations | Code | 0
Extending Class Activation Mapping Using Gaussian Receptive Field | — | 0
MHSAN: Multi-Head Self-Attention Network for Visual Semantic Embedding | Code | 0
Visual Question Answering on 360° Images | — | 0
Multi-Layer Content Interaction Through Quaternion Product For Visual Question Answering | — | 0
Cost Function Dependent Barren Plateaus in Shallow Parametrized Quantum Circuits | — | 0
Vision and Language: from Visual Perception to Content Creation | — | 0
Deep Exemplar Networks for VQA and VQG | — | 0
KonVid-150k: A Dataset for No-Reference Video Quality Assessment of Videos in-the-Wild | — | 0
Towards Causal VQA: Revealing and Reducing Spurious Correlations by Invariant and Covariant Semantic Editing | — | 0
AI2D-RST: A multimodal corpus of 1000 primary school science diagrams | — | 0
Weak Supervision helps Emergence of Word-Object Alignment and improves Vision-Language Tasks | — | 0
12-in-1: Multi-Task Vision and Language Representation Learning | Code | 0
Deep Bayesian Active Learning for Multiple Correct Outputs | — | 0
RUBi: Reducing Unimodal Biases for Visual Question Answering | Code | 0
TAB-VCR: Tags and Attributes based VCR Baselines | Code | 0
Assessing the Robustness of Visual Question Answering Models | — | 0
A Free Lunch in Generating Datasets: Building a VQG and VQA System with Attention and Humans in the Loop | — | 0
OptiBox: Breaking the Limits of Proposals for Visual Grounding | — | 0
Transfer Learning in Visual and Relational Reasoning | — | 0
Unsupervised Keyword Extraction for Full-sentence VQA | — | 0
Temporal Reasoning via Audio Question Answering | Code | 0
Explanation vs Attention: A Two-Player Game to Obtain Attention for VQA | — | 0
DualVD: An Adaptive Dual Encoding Model for Deep Visual Understanding in Visual Dialogue | Code | 0
Page 72 of 87

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | — | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | — | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | — | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | — | Unverified
5 | Kakao Brain | Accuracy | 73.33 | — | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | — | Unverified
7 | 270 | Accuracy | 70.23 | — | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | — | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | — | Unverified
10 | VinVL+L | Accuracy | 64.85 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | — | Unverified
2 | BEiT-3 | Accuracy | 84.19 | — | Unverified
3 | VLMo | Accuracy | 82.78 | — | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | — | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | — | Unverified
6 | CuMo-7B | Accuracy | 82.2 | — | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | — | Unverified
8 | MMU | Accuracy | 81.26 | — | Unverified
9 | Lyrics | Accuracy | 81.2 | — | Unverified
10 | InternVL-C | Accuracy | 81.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | — | Unverified
2 | mPLUG-Huge | overall | 83.62 | — | Unverified
3 | ONE-PEACE | overall | 82.52 | — | Unverified
4 | X2-VLM (large) | overall | 81.8 | — | Unverified
5 | VLMo | overall | 81.3 | — | Unverified
6 | SimVLM | overall | 80.34 | — | Unverified
7 | X2-VLM (base) | overall | 80.2 | — | Unverified
8 | VAST | overall | 80.19 | — | Unverified
9 | VALOR | overall | 78.62 | — | Unverified
10 | Prompt Tuning | overall | 78.53 | — | Unverified
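
For reference, the Accuracy metric on the standard VQA benchmarks is usually the consensus measure from the original VQA paper: each question has ten human answers, and a prediction counts as fully correct when at least three annotators gave it. A minimal sketch follows; the official evaluation script additionally averages over annotator subsets and applies a longer list of answer-normalization rules.

```python
import re

def normalize(ans: str) -> str:
    # Light normalization; the official script also handles articles,
    # number words, contractions, etc.
    return re.sub(r"[^\w\s]", "", ans.lower()).strip()

def vqa_accuracy(prediction: str, human_answers: list[str]) -> float:
    # A prediction is fully correct if >= 3 of the (typically 10)
    # human annotators gave the same answer; partial credit otherwise.
    pred = normalize(prediction)
    matches = sum(normalize(a) == pred for a in human_answers)
    return min(matches / 3.0, 1.0)

# Example: 2 of 10 annotators match the prediction -> accuracy 2/3.
print(vqa_accuracy("2", ["2", "2", "two", "3"] + ["cat"] * 6))
```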