SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system answers natural-language questions about an image. The goal is to teach machines to understand the content of an image well enough to answer open-ended questions about it, such as "How many cats are on the couch?" or "What color is the car?"
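To make the task concrete, below is a minimal inference sketch using ViLT (one of the papers listed further down) through the Hugging Face transformers library. The checkpoint name and demo image URL come from the model's public documentation; the question string is illustrative, and this is a sketch of one possible model, not the method behind any particular leaderboard entry.

# Minimal VQA inference sketch with ViLT fine-tuned on VQAv2.
# Requires: pip install transformers torch pillow requests
from transformers import ViltProcessor, ViltForQuestionAnswering
from PIL import Image
import requests

# Any RGB image works; this COCO validation image is a common demo input.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are on the couch?"

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# Encode the (image, question) pair and pick the highest-scoring answer
# from the model's fixed answer vocabulary.
encoding = processor(image, question, return_tensors="pt")
logits = model(**encoding).logits
answer = model.config.id2label[logits.argmax(-1).item()]
print("Predicted answer:", answer)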

(Example image omitted. Image source: visualqa.org)

Papers

Showing 476–500 of 2167 papers

Title | Status | Hype
NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions | Code | 1
Found a Reason for me? Weakly-supervised Grounded Visual Question Answering using Capsules | Code | 1
Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning | Code | 1
Passage Retrieval for Outside-Knowledge Visual Question Answering | Code | 1
MDETR -- Modulated Detection for End-to-End Multi-Modal Understanding | Code | 1
RelTransformer: A Transformer-Based Long-Tail Visual Relationship Recognition | Code | 1
GraphVQA: Language-Guided Graph Neural Networks for Graph-based Visual Question Answering | Code | 1
Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering | Code | 1
MMBERT: Multimodal BERT Pretraining for Improved Medical VQA | Code | 1
VisQA: X-raying Vision and Language Reasoning in Transformers | Code | 1
Towards General Purpose Vision Systems | Code | 1
Are Bias Mitigation Techniques for Deep Learning Effective? | Code | 1
Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers | Code | 1
SUTD-TrafficQA: A Question Answering Benchmark and an Efficient Network for Video Reasoning over Traffic Events | Code | 1
On the hidden treasure of dialog in video question answering | Code | 1
Multi-Modal Answer Validation for Knowledge-Based VQA | Code | 1
Going Full-TILT Boogie on Document Understanding with Text-Image-Layout Transformer | Code | 1
SLAKE: A Semantically-Labeled Knowledge-Enhanced Dataset for Medical Visual Question Answering | Code | 1
Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts | Code | 1
Less is More: ClipBERT for Video-and-Language Learning via Sparse Sampling | Code | 1
ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision | Code | 1
Unifying Vision-and-Language Tasks via Text Generation | Code | 1
VisualMRC: Machine Reading Comprehension on Document Images | Code | 1
Multimodal Co-Attention Transformer for Survival Prediction in Gigapixel Whole Slide Images | Code | 1
TRAR: Routing the Attention Spans in Transformer for Visual Question Answering | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | - | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | - | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | - | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | - | Unverified
5 | Kakao Brain | Accuracy | 73.33 | - | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | - | Unverified
7 | 270 | Accuracy | 70.23 | - | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | - | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | - | Unverified
10 | VinVL+L | Accuracy | 64.85 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | - | Unverified
2 | BEiT-3 | Accuracy | 84.19 | - | Unverified
3 | VLMo | Accuracy | 82.78 | - | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | - | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | - | Unverified
6 | CuMo-7B | Accuracy | 82.2 | - | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | - | Unverified
8 | MMU | Accuracy | 81.26 | - | Unverified
9 | InternVL-C | Accuracy | 81.2 | - | Unverified
10 | Lyrics | Accuracy | 81.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | - | Unverified
2 | mPLUG-Huge | overall | 83.62 | - | Unverified
3 | ONE-PEACE | overall | 82.52 | - | Unverified
4 | X2-VLM (large) | overall | 81.8 | - | Unverified
5 | VLMo | overall | 81.3 | - | Unverified
6 | SimVLM | overall | 80.34 | - | Unverified
7 | X2-VLM (base) | overall | 80.2 | - | Unverified
8 | VAST | overall | 80.19 | - | Unverified
9 | VALOR | overall | 78.62 | - | Unverified
10 | Prompt Tuning | overall | 78.53 | - | Unverified
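The tables above do not name their benchmarks, so the exact scoring rules cannot be confirmed from this snapshot; as background, VQA benchmarks such as VQA v2 typically report a consensus-based accuracy that credits an answer in proportion to how many of the ten human annotators gave it. A simplified sketch of that metric (the official version also normalizes answer strings and averages over annotator subsets):

def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    # Simplified VQAv2-style accuracy: an answer counts as fully correct
    # if at least 3 of the (typically 10) human annotators gave it;
    # fewer matches earn proportional partial credit.
    matches = sum(a == predicted for a in human_answers)
    return min(matches / 3.0, 1.0)

# Example: 2 of 10 annotators said "blue" -> partial credit of 2/3.
print(vqa_accuracy("blue", ["blue", "blue", "navy"] + ["dark blue"] * 7))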