SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a multimodal task at the intersection of computer vision and natural language processing: given an image and a natural-language question about it, a model must produce a natural-language answer. The goal of VQA is to teach machines to understand the content of an image well enough to answer free-form questions about it.

(Image source: visualqa.org)
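Concretely, a VQA system maps an (image, question) pair to an answer string. The sketch below shows what inference looks like using the Hugging Face transformers visual-question-answering pipeline with ViLT fine-tuned on VQAv2; it is a minimal illustration, and the image path and question are placeholders, not part of this page.

```python
# Minimal VQA inference sketch using the Hugging Face `transformers` pipeline.
# Assumes `transformers`, `torch`, and `Pillow` are installed; the image path
# and question below are illustrative placeholders.
from transformers import pipeline
from PIL import Image

# "visual-question-answering" is a built-in pipeline task; here it is pinned
# to ViLT fine-tuned on VQAv2 for reproducibility.
vqa = pipeline("visual-question-answering",
               model="dandelin/vilt-b32-finetuned-vqa")

image = Image.open("example.jpg")          # placeholder image
question = "How many people are in the photo?"

# The pipeline returns candidate answers ranked by confidence score.
for candidate in vqa(image=image, question=question, top_k=3):
    print(f"{candidate['answer']}: {candidate['score']:.3f}")
```

Classification-style models such as ViLT pick an answer from a fixed vocabulary, whereas generative vision-language models (several of which appear in the benchmark tables below) decode the answer as free text.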

Papers

Showing 1176–1200 of 2167 papers (page 48 of 87)

| Title | Status | Hype |
|---|---|---|
| Learning Reasoning Paths over Semantic Graphs for Video-grounded Dialogues | | 0 |
| Learning Rich Image Region Representation for Visual Question Answering | | 0 |
| Learning Sparse Mixture of Experts for Visual Question Answering | | 0 |
| Learning to Answer Multilingual and Code-Mixed Questions | | 0 |
| Learning to Answer Questions From Image Using Convolutional Neural Network | | 0 |
| Learning to Collocate Neural Modules for Image Captioning | | 0 |
| Learning to Compose Diversified Prompts for Image Emotion Classification | | 0 |
| Learning to Compress Contexts for Efficient Knowledge-based Visual Question Answering | | 0 |
| Learning to Disambiguate by Asking Discriminative Questions | | 0 |
| Learning to Reason Iteratively and Parallelly for Complex Visual Reasoning Scenarios | | 0 |
| Neural Reasoning, Fast and Slow, for Video Question Answering | | 0 |
| Learning to Recognize the Unseen Visual Predicates | | 0 |
| Learning to Select Question-Relevant Relations for Visual Question Answering | | 0 |
| Learning to Specialize with Knowledge Distillation for Visual Question Answering | | 0 |
| Learning Visual Knowledge Memory Networks for Visual Question Answering | | 0 |
| Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision | | 0 |
| LEGO-Puzzles: How Good Are MLLMs at Multi-Step Spatial Reasoning? | | 0 |
| Less Is More: Linear Layers on CLIP Features as Powerful VizWiz Model | | 0 |
| Let's ViCE! Mimicking Human Cognitive Behavior in Image Generation Evaluation | | 0 |
| Leveraging Medical Visual Question Answering with Supporting Facts | | 0 |
| Leveraging Video Descriptions to Learn Video Question Answering | | 0 |
| Leveraging Visual Question Answering for Image-Caption Ranking | | 0 |
| Leveraging Visual Question Answering to Improve Text-to-Image Synthesis | | 0 |
| Lingshu: A Generalist Foundation Model for Unified Multimodal Medical Understanding and Reasoning | | 0 |
| LinguaMark: Do Multimodal Models Speak Fairly? A Benchmark-Based Evaluation | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | human | Accuracy | 89.3 | | Unverified |
| 2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | | Unverified |
| 3 | TRRNet (Ensemble) | Accuracy | 74.03 | | Unverified |
| 4 | MIL-nbgao | Accuracy | 73.81 | | Unverified |
| 5 | Kakao Brain | Accuracy | 73.33 | | Unverified |
| 6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | | Unverified |
| 7 | 270 | Accuracy | 70.23 | | Unverified |
| 8 | NSM ensemble (updated) | Accuracy | 67.55 | | Unverified |
| 9 | VinVL-DPT | Accuracy | 64.92 | | Unverified |
| 10 | VinVL+L | Accuracy | 64.85 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | PaLI | Accuracy | 84.3 | | Unverified |
| 2 | BEiT-3 | Accuracy | 84.19 | | Unverified |
| 3 | VLMo | Accuracy | 82.78 | | Unverified |
| 4 | ONE-PEACE | Accuracy | 82.6 | | Unverified |
| 5 | mPLUG (Huge) | Accuracy | 82.43 | | Unverified |
| 6 | CuMo-7B | Accuracy | 82.2 | | Unverified |
| 7 | X2-VLM (large) | Accuracy | 81.9 | | Unverified |
| 8 | MMU | Accuracy | 81.26 | | Unverified |
| 9 | Lyrics | Accuracy | 81.2 | | Unverified |
| 10 | InternVL-C | Accuracy | 81.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BEiT-3 | overall | 84.03 | | Unverified |
| 2 | mPLUG-Huge | overall | 83.62 | | Unverified |
| 3 | ONE-PEACE | overall | 82.52 | | Unverified |
| 4 | X2-VLM (large) | overall | 81.8 | | Unverified |
| 5 | VLMo | overall | 81.3 | | Unverified |
| 6 | SimVLM | overall | 80.34 | | Unverified |
| 7 | X2-VLM (base) | overall | 80.2 | | Unverified |
| 8 | VAST | overall | 80.19 | | Unverified |
| 9 | VALOR | overall | 78.62 | | Unverified |
| 10 | Prompt Tuning | overall | 78.53 | | Unverified |
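
A note on the metrics above: on the standard VQA benchmarks, each question carries ten human-provided answers, and a predicted answer earns min(matches / 3, 1) credit, so an answer given by at least three annotators counts as fully correct. "Overall" denotes this accuracy averaged across all question types (as opposed to the per-type yes/no, number, and other splits). Below is a minimal sketch of the simplified form of this consensus metric; the function and variable names are illustrative, and the official evaluation additionally normalizes answer strings and averages over annotator subsets.

```python
# Sketch of the simplified VQA consensus accuracy: a prediction counts as
# fully correct if at least 3 of the 10 human annotators gave that answer.
# (Names are illustrative; the official scorer also normalizes answers.)
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    matches = sum(answer == predicted for answer in human_answers)
    return min(matches / 3.0, 1.0)

# Example: 2 of 10 annotators said "2", so the prediction earns 2/3 credit.
print(vqa_accuracy("2", ["2", "2", "3", "3", "3", "3", "two", "3", "3", "3"]))
```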