SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system answers natural-language questions about an image. The goal is to teach machines to understand an image's content well enough to answer questions about it in natural language.

Image Source: visualqa.org
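
By way of illustration (this page does not prescribe any particular model), a minimal VQA inference sketch using the Hugging Face transformers library and the publicly released ViLT checkpoint dandelin/vilt-b32-finetuned-vqa; the image URL and question below are arbitrary examples:

```python
# Minimal VQA inference sketch. Assumes the Hugging Face `transformers`
# library and the public ViLT VQA checkpoint; swap in any other VQA model.
from PIL import Image
import requests
from transformers import ViltProcessor, ViltForQuestionAnswering

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# Example (image, question) pair; the URL is a standard COCO sample image.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

# Encode the pair and pick the highest-scoring answer from the model's
# fixed answer vocabulary (ViLT treats VQA as classification over answers).
inputs = processor(image, question, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```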

Papers

Showing 301–350 of 2167 papers

Title | Status | Hype
Language-Informed Visual Concept Learning | Code | 1
Language Guided Visual Question Answering: Elevate Your Multimodal Language Model Using Knowledge-Enriched Prompts | Code | 1
Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA | Code | 1
LaKo: Knowledge-driven Visual Question Answering via Late Knowledge-to-Text Injection | Code | 1
LaPA: Latent Prompt Assist Model For Medical Visual Question Answering | Code | 1
BiomedCLIP: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs | Code | 1
Learning Situation Hyper-Graphs for Video Question Answering | Code | 1
Change Detection Meets Visual Question Answering | Code | 1
A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models | Code | 1
Disentangling 3D Prototypical Networks For Few-Shot Concept Learning | Code | 1
Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases | Code | 1
Distilled Dual-Encoder Model for Vision-Language Understanding | Code | 1
DocFormerv2: Local Features for Document Understanding | Code | 1
Check It Again: Progressive Visual Question Answering via Visual Entailment | Code | 1
ChipQA: No-Reference Video Quality Prediction via Space-Time Chips | Code | 1
ChiQA: A Large Scale Image-based Real-World Question Answering Dataset for Multi-Modal Understanding | Code | 1
DocVQA: A Dataset for VQA on Document Images | Code | 1
Kosmos-2: Grounding Multimodal Large Language Models to the World | Code | 1
Detecting and Preventing Hallucinations in Large Vision Language Models | Code | 1
Describe Anything Model for Visual Question Answering on Text-rich Images | Code | 1
Detecting Hate Speech in Multi-modal Memes | Code | 1
Dual-Key Multimodal Backdoors for Visual Question Answering | Code | 1
Comprehensive Visual Question Answering on Point Clouds through Compositional Scene Manipulation | Code | 1
CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning | Code | 1
KVQ: Boosting Video Quality Assessment via Saliency-guided Local Perception | Code | 1
KAT: A Knowledge Augmented Transformer for Vision-and-Language | Code | 1
CLEVR-Math: A Dataset for Compositional Language, Visual and Mathematical Reasoning | Code | 1
ActiView: Evaluating Active Perception Ability for Multimodal Large Language Models | Code | 1
Light-VQA: A Multi-Dimensional Quality Assessment Model for Low-Light Video Enhancement | Code | 1
Deep Multimodal Neural Architecture Search | Code | 1
AIGV-Assessor: Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM | Code | 1
Knowledge-Based Video Question Answering with Unsupervised Scene Descriptions | Code | 1
JDocQA: Japanese Document Question Answering Dataset for Generative Language Models | Code | 1
DeVLBert: Learning Deconfounded Visio-Linguistic Representations | Code | 1
Just Ask: Learning to Answer Questions from Millions of Narrated Videos | Code | 1
Knowledge-Routed Visual Question Reasoning: Challenges for Deep Representation Embedding | Code | 1
Declaration-based Prompt Tuning for Visual Question Answering | Code | 1
InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks | Code | 1
Clover: Towards A Unified Video-Language Alignment and Fusion Model | Code | 1
Coarse-to-Fine Reasoning for Visual Question Answering | Code | 1
Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone | Code | 1
Interpreting Chest X-rays Like a Radiologist: A Benchmark with Clinical Reasoning | Code | 1
End-to-end Document Recognition and Understanding with Dessurt | Code | 1
CAT-ViL: Co-Attention Gated Vision-Language Embedding for Visual Question Localized-Answering in Robotic Surgery | Code | 1
Efficient Vision-Language Pretraining with Visual Concepts and Hierarchical Alignment | Code | 1
COBRA: Contrastive Bi-Modal Representation Algorithm | Code | 1
CoCa: Contrastive Captioners are Image-Text Foundation Models | Code | 1
MAPL: Parameter-Efficient Adaptation of Unimodal Pre-Trained Models for Vision-Language Few-Shot Prompting | Code | 1
Debiased Visual Question Answering from Feature and Sample Perspectives | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | | Unverified
5 | Kakao Brain | Accuracy | 73.33 | | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | | Unverified
7 | 270 | Accuracy | 70.23 | | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | | Unverified
10 | VinVL+L | Accuracy | 64.85 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | | Unverified
2 | BEiT-3 | Accuracy | 84.19 | | Unverified
3 | VLMo | Accuracy | 82.78 | | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | | Unverified
6 | CuMo-7B | Accuracy | 82.2 | | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | | Unverified
8 | MMU | Accuracy | 81.26 | | Unverified
9 | Lyrics | Accuracy | 81.2 | | Unverified
10 | InternVL-C | Accuracy | 81.2 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | | Unverified
2 | mPLUG-Huge | overall | 83.62 | | Unverified
3 | ONE-PEACE | overall | 82.52 | | Unverified
4 | X2-VLM (large) | overall | 81.8 | | Unverified
5 | VLMo | overall | 81.3 | | Unverified
6 | SimVLM | overall | 80.34 | | Unverified
7 | X2-VLM (base) | overall | 80.2 | | Unverified
8 | VAST | overall | 80.19 | | Unverified
9 | VALOR | overall | 78.62 | | Unverified
10 | Prompt Tuning | overall | 78.53 | | Unverified
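
The "Accuracy" reported in VQA leaderboards is usually the consensus-based VQA accuracy, where each question carries ten human-annotated answers and a prediction earns full credit if at least three annotators gave it. A minimal sketch of the commonly cited simplification follows; the official evaluation additionally averages over all ten 9-annotator subsets and applies answer normalization before matching:

```python
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Consensus VQA accuracy (simplified): min(#matching annotators / 3, 1).

    A prediction matching >= 3 of the 10 human answers scores 1.0;
    fewer matches earn proportional partial credit.
    """
    predicted = predicted.strip().lower()
    matches = sum(a.strip().lower() == predicted for a in human_answers)
    return min(matches / 3.0, 1.0)

# Example: only 2 of 10 annotators answered "blue" -> partial credit 2/3.
answers = ["blue", "blue", "navy"] + ["dark blue"] * 7
print(vqa_accuracy("blue", answers))  # 0.666...
```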