
Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer vision task in which a model is given an image and a natural-language question about it and must produce a natural-language answer. The goal of VQA is to teach machines to understand the content of an image well enough to answer arbitrary questions about it.

Image Source: visualqa.org
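
As a concrete illustration of the task's input/output format, below is a minimal inference sketch using the publicly available BLIP VQA checkpoint via the Hugging Face transformers library. The checkpoint name is a real released model; the image path and question are placeholders.

```python
# Minimal VQA inference sketch (Hugging Face transformers + BLIP).
# "example.jpg" and the question are placeholders; any RGB image works.
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

image = Image.open("example.jpg").convert("RGB")  # placeholder image
question = "How many dogs are in the picture?"

# The processor tokenizes the question and preprocesses the image
# into the tensors the model expects.
inputs = processor(image, question, return_tensors="pt")

# BLIP generates the answer as a short free-form text sequence.
output_ids = model.generate(**inputs)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```

Most systems on the leaderboard below follow the same contract: image plus question in, short free-form answer out.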

Papers

Showing 451–475 of 2167 papers

Title | Status | Hype
An Empirical Analysis on Spatial Reasoning Capabilities of Large Multimodal Models | Code | 1
Kosmos-2: Grounding Multimodal Large Language Models to the World | Code | 1
Efficient Vision-Language Pretraining with Visual Concepts and Hierarchical Alignment | Code | 1
Learning to Discretely Compose Reasoning Module Networks for Video Captioning | Code | 1
3D-Aware Visual Question Answering about Parts, Poses and Occlusions | Code | 1
Deep Multimodal Neural Architecture Search | Code | 1
MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale | Code | 1
Label-Descriptive Patterns and Their Application to Characterizing Classification Errors | Code | 1
Benchmarking Vision Language Model Unlearning via Fictitious Facial Identity Dataset | Code | 1
Language Guided Visual Question Answering: Elevate Your Multimodal Language Model Using Knowledge-Enriched Prompts | Code | 1
Language-Informed Visual Concept Learning | Code | 1
Describe Anything Model for Visual Question Answering on Text-rich Images | Code | 1
A Comprehensive Evaluation of GPT-4V on Knowledge-Intensive Visual Question Answering | Code | 1
Large-Scale Adversarial Training for Vision-and-Language Representation Learning | Code | 1
MAPL: Parameter-Efficient Adaptation of Unimodal Pre-Trained Models for Vision-Language Few-Shot Prompting | Code | 1
Detecting and Preventing Hallucinations in Large Vision Language Models | Code | 1
Detecting Hate Speech in Multi-modal Memes | Code | 1
An Empirical Study of End-to-End Video-Language Transformers with Masked Visual Modeling | Code | 1
MDETR -- Modulated Detection for End-to-End Multi-Modal Understanding | Code | 1
Calibrating Concepts and Operations: Towards Symbolic Reasoning on Real Images | Code | 1
An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA | Code | 1
Layout and Task Aware Instruction Prompt for Zero-shot Document Image Question Answering | Code | 1
Lumen: Unleashing Versatile Vision-Centric Capabilities of Large Multimodal Models | Code | 1
LaTr: Layout-Aware Transformer for Scene-Text VQA | Code | 1
LXMERT: Learning Cross-Modality Encoder Representations from Transformers | Code | 1
Page 19 of 87

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | - | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | - | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | - | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | - | Unverified
5 | Kakao Brain | Accuracy | 73.33 | - | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | - | Unverified
7 | 270 | Accuracy | 70.23 | - | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | - | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | - | Unverified
10 | VinVL+L | Accuracy | 64.85 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | - | Unverified
2 | BEiT-3 | Accuracy | 84.19 | - | Unverified
3 | VLMo | Accuracy | 82.78 | - | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | - | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | - | Unverified
6 | CuMo-7B | Accuracy | 82.2 | - | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | - | Unverified
8 | MMU | Accuracy | 81.26 | - | Unverified
9 | Lyrics | Accuracy | 81.2 | - | Unverified
10 | InternVL-C | Accuracy | 81.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | - | Unverified
2 | mPLUG-Huge | overall | 83.62 | - | Unverified
3 | ONE-PEACE | overall | 82.52 | - | Unverified
4 | X2-VLM (large) | overall | 81.8 | - | Unverified
5 | VLMo | overall | 81.3 | - | Unverified
6 | SimVLM | overall | 80.34 | - | Unverified
7 | X2-VLM (base) | overall | 80.2 | - | Unverified
8 | VAST | overall | 80.19 | - | Unverified
9 | VALOR | overall | 78.62 | - | Unverified
10 | Prompt Tuning | overall | 78.53 | - | Unverified
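
For context on the Accuracy and overall numbers above: the official VQA evaluation (visualqa.org) scores a predicted answer against ten human-annotated answers rather than a single exact match. A sketch of the standard definition, assuming the tables above use the official metric:

```latex
% Standard VQA accuracy (visualqa.org): a predicted answer a counts as
% fully correct when at least 3 of the 10 human annotators gave it.
\mathrm{Acc}(a) = \min\!\left(\frac{\#\{\text{annotators who answered } a\}}{3},\ 1\right)
```

In practice the benchmark averages this score over all 10-choose-9 subsets of the human answers, which slightly smooths the result.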