SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a task at the intersection of computer vision and natural language processing in which a system answers natural-language questions about an image. The goal is to teach machines to understand the content of an image well enough to answer open-ended questions about it in natural language.

Image Source: visualqa.org
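
To make the task's input/output contract concrete, here is a minimal inference sketch using the Hugging Face transformers "visual-question-answering" pipeline. The checkpoint, image path, and question are illustrative assumptions, not something this page prescribes.

```python
# Minimal VQA inference sketch using the Hugging Face `transformers`
# pipeline API (assumes `pip install transformers torch pillow`).
# The checkpoint, image path, and question are illustrative assumptions.
from transformers import pipeline

vqa = pipeline(
    "visual-question-answering",
    model="dandelin/vilt-b32-finetuned-vqa",  # ViLT fine-tuned on VQAv2
)

answers = vqa(
    image="street.jpg",  # placeholder: a local path or URL to any image
    question="How many people are crossing the street?",
)
# The pipeline returns ranked candidate answers with confidence scores,
# e.g. [{"answer": "2", "score": 0.71}, ...]
print(answers[0]["answer"])
```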

Papers

Showing 551–600 of 2167 papers

Title | Status | Hype
YouMakeup VQA Challenge: Towards Fine-grained Action Understanding in Domain-Specific Videos | Code | 1
Evaluating Multimodal Representations on Visual Semantic Textual Similarity | Code | 1
Pixel-BERT: Aligning Image Pixels with Text by Deep Multi-Modal Transformers | Code | 1
Multi-Modal Graph Neural Network for Joint Reasoning on Vision and Scene Text | Code | 1
X-Linear Attention Networks for Image Captioning | Code | 1
Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI | Code | 1
Counterfactual Samples Synthesizing for Robust Visual Question Answering | Code | 1
PathVQA: 30000+ Questions for Medical Visual Question Answering | Code | 1
Visual Commonsense R-CNN | Code | 1
Hierarchical Conditional Relation Networks for Video Question Answering | Code | 1
Multimodal fusion of imaging and genomics for lung cancer recurrence prediction | Code | 1
Break It Down: A Question Understanding Benchmark | Code | 1
Fine-grained Image Classification and Retrieval by Combining Visual and Locally Pooled Textual Features | Code | 1
In Defense of Grid Features for Visual Question Answering | Code | 1
Think Locally, Act Globally: Federated Learning with Local and Global Representations | Code | 1
Overcoming Data Limitation in Medical Visual Question Answering | Code | 1
UNITER: UNiversal Image-TExt Representation Learning | Code | 1
Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases | Code | 1
VL-BERT: Pre-training of Generic Visual-Linguistic Representations | Code | 1
LXMERT: Learning Cross-Modality Encoder Representations from Transformers | Code | 1
VideoNavQA: Bridging the Gap between Visual and Embodied Question Answering | Code | 1
VisualBERT: A Simple and Performant Baseline for Vision and Language | Code | 1
ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks | Code | 1
OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge | Code | 1
Scene Text Visual Question Answering | Code | 1
GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering | Code | 1
Faithful Multimodal Explanation for Visual Question Answering | Code | 1
R-VQA: Learning Visual Relation Facts with Semantic Attention for Visual Question Answering | Code | 1
Compositional Attention Networks for Machine Reasoning | Code | 1
AI2-THOR: An Interactive 3D Environment for Visual AI | Code | 1
Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments | Code | 1
FiLM: Visual Reasoning with a General Conditioning Layer | Code | 1
Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering | Code | 1
ParlAI: A Dialog Research Software Platform | Code | 1
Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | Code | 1
CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning | Code | 1
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization | Code | 1
Hierarchical Question-Image Co-Attention for Visual Question Answering | Code | 1
Stacked Attention Networks for Image Question Answering | Code | 1
VQA: Visual Question Answering | Code | 1
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning | Code | 0
MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM | - | 0
Evaluating Attribute Confusion in Fashion Text-to-Image Generation | - | 0
LinguaMark: Do Multimodal Models Speak Fairly? A Benchmark-Based Evaluation | - | 0
DrishtiKon: Multi-Granular Visual Grounding for Text-Rich Document Images | Code | 0
SMMILE: An Expert-Driven Benchmark for Multimodal Medical In-Context Learning | - | 0
Bridging Video Quality Scoring and Justification via Large Multimodal Models | - | 0
HRIBench: Benchmarking Vision-Language Models for Real-Time Human Perception in Human-Robot Interaction | Code | 0
FOCUS: Internal MLLM Representations for Efficient Fine-Grained Visual Question Answering | - | 0
GEMeX-ThinkVG: Towards Thinking with Visual Grounding in Medical VQA via Reinforcement Learning | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | - | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | - | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | - | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | - | Unverified
5 | Kakao Brain | Accuracy | 73.33 | - | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | - | Unverified
7 | 270 | Accuracy | 70.23 | - | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | - | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | - | Unverified
10 | VinVL+L | Accuracy | 64.85 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | - | Unverified
2 | BEiT-3 | Accuracy | 84.19 | - | Unverified
3 | VLMo | Accuracy | 82.78 | - | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | - | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | - | Unverified
6 | CuMo-7B | Accuracy | 82.2 | - | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | - | Unverified
8 | MMU | Accuracy | 81.26 | - | Unverified
9 | Lyrics | Accuracy | 81.2 | - | Unverified
10 | InternVL-C | Accuracy | 81.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | - | Unverified
2 | mPLUG-Huge | overall | 83.62 | - | Unverified
3 | ONE-PEACE | overall | 82.52 | - | Unverified
4 | X2-VLM (large) | overall | 81.8 | - | Unverified
5 | VLMo | overall | 81.3 | - | Unverified
6 | SimVLM | overall | 80.34 | - | Unverified
7 | X2-VLM (base) | overall | 80.2 | - | Unverified
8 | VAST | overall | 80.19 | - | Unverified
9 | VALOR | overall | 78.62 | - | Unverified
10 | Prompt Tuning | overall | 78.53 | - | Unverified
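
A note on the accuracy numbers above: on the VQA v1/v2 benchmarks, "Accuracy" and "overall" usually refer to the consensus-based VQA accuracy, which gives partial credit when fewer than three of the ten human annotators agree with the predicted answer (GQA-style leaderboards instead use plain exact-match accuracy). Below is a minimal sketch of the common single-pass approximation, assuming answer strings are already lowercased and normalized; the official evaluator additionally averages the score over all ten leave-one-out subsets of the ten human answers.

```python
# Consensus-based VQA accuracy: an answer scores min(#matching humans / 3, 1).
# Single-pass approximation; assumes pre-normalized answer strings.
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    matches = sum(ans == predicted for ans in human_answers)
    return min(matches / 3.0, 1.0)

# Three or more matching annotators -> full credit:
print(vqa_accuracy("2", ["2", "2", "2", "two", "3", "2", "3", "3", "two", "3"]))  # 1.0
# One matching annotator -> a third of the credit:
print(vqa_accuracy("red", ["maroon"] * 9 + ["red"]))  # 0.333...
```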