SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system answers natural-language questions about an image. The goal is for a model to understand the visual content of an image well enough to produce a correct answer in natural language.

Image Source: visualqa.org
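
For a concrete sense of the task, here is a minimal inference sketch. It assumes the Hugging Face transformers library and the publicly released dandelin/vilt-b32-finetuned-vqa checkpoint (a ViLT model fine-tuned for VQA v2 answer classification); the image URL and question are illustrative only, and any of the models listed below would follow the same encode-then-classify pattern.

```python
# Minimal VQA inference sketch with an off-the-shelf ViLT model fine-tuned
# on VQA v2 (assumes: pip install transformers torch pillow requests).
import requests
import torch
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

CKPT = "dandelin/vilt-b32-finetuned-vqa"  # public VQA checkpoint on the HF Hub
processor = ViltProcessor.from_pretrained(CKPT)
model = ViltForQuestionAnswering.from_pretrained(CKPT)

# Illustrative inputs: a COCO validation image and a free-form question.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

# The processor jointly encodes image patches and question tokens.
inputs = processor(image, question, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # scores over a fixed answer vocabulary

print(model.config.id2label[logits.argmax(-1).item()])  # e.g. "2"
```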

Papers

Showing 476–500 of 2167 papers

| Title | Status | Hype |
| --- | --- | --- |
| Spatially Aware Multimodal Transformers for TextVQA | Code | 1 |
| An Empirical Study of Multimodal Model Merging | Code | 1 |
| GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering | Code | 1 |
| Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization | Code | 1 |
| Faithful Multimodal Explanation for Visual Question Answering | Code | 1 |
| An Empirical Study of Training End-to-End Vision-and-Language Transformers | Code | 1 |
| FiLM: Visual Reasoning with a General Conditioning Layer | Code | 1 |
| FaceBench: A Multi-View Multi-Level Facial Attribute VQA Dataset for Benchmarking Face Perception MLLMs | Code | 1 |
| Disentangling 3D Prototypical Networks For Few-Shot Concept Learning | Code | 1 |
| Answer Mining from a Pool of Images: Towards Retrieval-Based Visual Question Answering | Code | 1 |
| Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering | Code | 1 |
| GraphVQA: Language-Guided Graph Neural Networks for Graph-based Visual Question Answering | Code | 1 |
| Distilled Dual-Encoder Model for Vision-Language Understanding | Code | 1 |
| GeoLLaVA-8K: Scaling Remote-Sensing Multimodal Large Language Models to 8K Resolution | Code | 1 |
| 3DMIT: 3D Multi-modal Instruction Tuning for Scene Understanding | Code | 1 |
| T2Vs Meet VLMs: A Scalable Multimodal Dataset for Visual Harmfulness Recognition | Code | 1 |
| Going Full-TILT Boogie on Document Understanding with Text-Image-Layout Transformer | Code | 1 |
| Graphhopper: Multi-Hop Scene Graph Reasoning for Visual Question Answering | Code | 1 |
| TAP: Text-Aware Pre-training for Text-VQA and Text-Caption | Code | 1 |
| Hierarchical Question-Image Co-Attention for Visual Question Answering | Code | 1 |
| TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding | Code | 1 |
| Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs? | Code | 1 |
| DocVQA: A Dataset for VQA on Document Images | Code | 1 |
| Think Locally, Act Globally: Federated Learning with Local and Global Representations | Code | 1 |
| Bridging the Gap between 2D and 3D Visual Question Answering: A Fusion Approach for 3D VQA | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | human | Accuracy | 89.3 | — | Unverified |
| 2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | — | Unverified |
| 3 | TRRNet (Ensemble) | Accuracy | 74.03 | — | Unverified |
| 4 | MIL-nbgao | Accuracy | 73.81 | — | Unverified |
| 5 | Kakao Brain | Accuracy | 73.33 | — | Unverified |
| 6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | — | Unverified |
| 7 | 270 | Accuracy | 70.23 | — | Unverified |
| 8 | NSM ensemble (updated) | Accuracy | 67.55 | — | Unverified |
| 9 | VinVL-DPT | Accuracy | 64.92 | — | Unverified |
| 10 | VinVL+L | Accuracy | 64.85 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PaLI | Accuracy | 84.3 | — | Unverified |
| 2 | BEiT-3 | Accuracy | 84.19 | — | Unverified |
| 3 | VLMo | Accuracy | 82.78 | — | Unverified |
| 4 | ONE-PEACE | Accuracy | 82.6 | — | Unverified |
| 5 | mPLUG (Huge) | Accuracy | 82.43 | — | Unverified |
| 6 | CuMo-7B | Accuracy | 82.2 | — | Unverified |
| 7 | X2-VLM (large) | Accuracy | 81.9 | — | Unverified |
| 8 | MMU | Accuracy | 81.26 | — | Unverified |
| 9 | Lyrics | Accuracy | 81.2 | — | Unverified |
| 10 | InternVL-C | Accuracy | 81.2 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BEiT-3 | overall | 84.03 | — | Unverified |
| 2 | mPLUG-Huge | overall | 83.62 | — | Unverified |
| 3 | ONE-PEACE | overall | 82.52 | — | Unverified |
| 4 | X2-VLM (large) | overall | 81.8 | — | Unverified |
| 5 | VLMo | overall | 81.3 | — | Unverified |
| 6 | SimVLM | overall | 80.34 | — | Unverified |
| 7 | X2-VLM (base) | overall | 80.2 | — | Unverified |
| 8 | VAST | overall | 80.19 | — | Unverified |
| 9 | VALOR | overall | 78.62 | — | Unverified |
| 10 | Prompt Tuning | overall | 78.53 | — | Unverified |
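
The tables above report a single answer-accuracy figure ("Accuracy", or "overall" on test-server leaderboards). For open-ended benchmarks in the VQA v2 family, the standard metric is consensus accuracy over ten human answers; the sketch below assumes that definition, with answers already normalized (lowercased, punctuation and articles stripped) as the official evaluation script does. Some leaderboards instead score plain exact match, so treat this as one common convention rather than the rule for every row. The function name is illustrative.

```python
# Sketch of consensus ("soft") VQA accuracy: a predicted answer scores
# min(#matching human answers / 3, 1), averaged over the ten
# leave-one-annotator-out subsets, following the official VQA evaluation.
# Assumes prediction and human answers are already normalized.
def vqa_consensus_accuracy(predicted: str, human_answers: list[str]) -> float:
    scores = []
    for i in range(len(human_answers)):
        # Score against the other nine annotators, leaving annotator i out.
        others = human_answers[:i] + human_answers[i + 1:]
        matches = sum(ans == predicted for ans in others)
        scores.append(min(matches / 3.0, 1.0))
    return sum(scores) / len(scores)

# Four of ten annotators answering "2" already yields full credit:
print(vqa_consensus_accuracy(
    "2", ["2", "2", "2", "two", "3", "3", "two", "2", "3", "3"]))  # 1.0
```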