SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer vision task in which a system is given an image and a natural-language question about it and must produce an answer. The goal of VQA is to teach machines to understand the content of an image well enough to answer questions about it in natural language.

Image Source: visualqa.org
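
A typical VQA system takes an image and a question as input and returns an answer, often with a confidence score. As an illustration only, here is a minimal inference sketch using the Hugging Face transformers visual-question-answering pipeline; the model checkpoint, image path, and question are placeholder choices and not part of this page.

```python
# Minimal VQA inference sketch (assumes the Hugging Face `transformers` library;
# the checkpoint, image path, and question below are placeholder examples).
from transformers import pipeline

vqa = pipeline(
    "visual-question-answering",
    model="dandelin/vilt-b32-finetuned-vqa",
)

# The pipeline accepts a local path, URL, or PIL.Image plus a question string.
predictions = vqa(image="example.jpg", question="What is the person holding?")

# Each prediction is a dict with a candidate answer and a confidence score.
for p in predictions:
    print(f"{p['answer']}: {p['score']:.3f}")
```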

Papers

Showing 701–725 of 2167 papers

Title | Status | Hype
MUREL: Multimodal Relational Reasoning for Visual Question Answering | Code | 0
Neural Module Networks | Code | 0
Diversify, Rationalize, and Combine: Ensembling Multiple QA Strategies for Zero-shot Knowledge-based VQA | Code | 0
Contextual Dropout: An Efficient Sample-Dependent Dropout Module | Code | 0
Alignment Attention by Matching Key and Query Distributions | Code | 0
Adaptively Clustering Neighbor Elements for Image-Text Generation | Code | 0
Aligning Visual Regions and Textual Concepts for Semantic-Grounded Image Representations | Code | 0
Multimodal Residual Learning for Visual QA | Code | 0
Adaptive loose optimization for robust question answering | Code | 0
Compressing And Debiasing Vision-Language Pre-Trained Models for Visual Question Answering | Code | 0
Composition Vision-Language Understanding via Segment and Depth Anything Model | Code | 0
Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding | Code | 0
Compositionality as Lexical Symmetry | Code | 0
Multimodal Explanations: Justifying Decisions and Pointing to the Evidence | Code | 0
Multi-modal Factorized Bilinear Pooling with Co-Attention Learning for Visual Question Answering | Code | 0
Multimodal Hypothetical Summary for Retrieval-based Multi-image Question Answering | Code | 0
Multi-Image Visual Question Answering | Code | 0
Compact Trilinear Interaction for Visual Question Answering | Code | 0
CommVQA: Situating Visual Question Answering in Communicative Contexts | Code | 0
MQA: Answering the Question via Robotic Manipulation | Code | 0
Modulating early visual processing by language | Code | 0
COLUMBUS: Evaluating COgnitive Lateral Understanding through Multiple-choice reBUSes | Code | 0
Modeling Relationships in Referential Expressions with Compositional Modular Networks | Code | 0
Modularized Zero-shot VQA with Pre-trained Models | Code | 0
Adapting Lightweight Vision Language Models for Radiological Visual Question Answering | Code | 0
Page 29 of 87

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | – | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | – | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | – | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | – | Unverified
5 | Kakao Brain | Accuracy | 73.33 | – | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | – | Unverified
7 | 270 | Accuracy | 70.23 | – | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | – | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | – | Unverified
10 | VinVL+L | Accuracy | 64.85 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | – | Unverified
2 | BEiT-3 | Accuracy | 84.19 | – | Unverified
3 | VLMo | Accuracy | 82.78 | – | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | – | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | – | Unverified
6 | CuMo-7B | Accuracy | 82.2 | – | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | – | Unverified
8 | MMU | Accuracy | 81.26 | – | Unverified
9 | Lyrics | Accuracy | 81.2 | – | Unverified
10 | InternVL-C | Accuracy | 81.2 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | – | Unverified
2 | mPLUG-Huge | overall | 83.62 | – | Unverified
3 | ONE-PEACE | overall | 82.52 | – | Unverified
4 | X2-VLM (large) | overall | 81.8 | – | Unverified
5 | VLMo | overall | 81.3 | – | Unverified
6 | SimVLM | overall | 80.34 | – | Unverified
7 | X2-VLM (base) | overall | 80.2 | – | Unverified
8 | VAST | overall | 80.19 | – | Unverified
9 | VALOR | overall | 78.62 | – | Unverified
10 | Prompt Tuning | overall | 78.53 | – | Unverified
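
This page does not name the benchmarks behind these tables, but leaderboards that report a VQA "Accuracy" or "overall" score commonly use the consensus accuracy from the VQA evaluation protocol, where an answer counts as fully correct once at least three of the ten human annotators gave it. Below is a hedged sketch of that metric, assuming ten ground-truth answers per question as in VQA v2; it may not match the exact scoring used by every leaderboard shown here, and it omits the answer-string normalization the official evaluation applies.

```python
# Sketch of VQA-style consensus accuracy (assumes 10 human answers per question,
# as in the VQA v2 annotation protocol; answer normalization is omitted).
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Average, over all leave-one-annotator-out subsets, of min(matches / 3, 1)."""
    scores = []
    for i in range(len(human_answers)):
        subset = human_answers[:i] + human_answers[i + 1:]  # drop one annotator
        matches = sum(1 for ans in subset if ans == predicted)
        scores.append(min(matches / 3.0, 1.0))
    return sum(scores) / len(scores)


# Example: 7 of the 10 annotators answered "2", so the prediction scores 1.0.
answers = ["2", "2", "two", "2", "3", "2", "2", "two", "2", "2"]
print(vqa_accuracy("2", answers))
```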