SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system answers natural-language questions about an image. The goal is to teach machines to understand visual content well enough to produce accurate answers in natural language.

Image Source: visualqa.org
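
In practice, most modern systems treat VQA as prediction over an (image, question) pair. As a concrete illustration, here is a minimal inference sketch using the publicly available `dandelin/vilt-b32-finetuned-vqa` checkpoint via Hugging Face Transformers; the sample image URL and question are placeholder assumptions, not part of this page.

```python
# Minimal VQA inference sketch with a pretrained ViLT model.
# Assumes `transformers`, `torch`, `Pillow`, and `requests` are installed;
# the COCO image URL and question below are arbitrary placeholders.
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

# Encode the (image, question) pair and pick the highest-scoring answer
# from the model's fixed answer vocabulary (VQA framed as classification).
inputs = processor(image, question, return_tensors="pt")
logits = model(**inputs).logits
answer = model.config.id2label[logits.argmax(-1).item()]
print(answer)
```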

Papers

Showing 1251–1300 of 2167 papers

| Title | Status | Hype |
| --- | --- | --- |
| FVQA 2.0: Introducing Adversarial Samples into Fact-based Visual Question Answering | — | 0 |
| Logical Implications for Visual Question Answering Consistency | Code | 0 |
| Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images | — | 0 |
| MRET: Multi-resolution Transformer for Video Quality Assessment | — | 0 |
| Polar-VQA: Visual Question Answering on Remote Sensed Ice sheet Imagery from Polar Region | — | 0 |
| Vision-Language Models as Success Detectors | — | 0 |
| MuLTI: Efficient Video-and-Language Understanding with Text-Guided MultiWay-Sampler and Multiple Choice Modeling | — | 0 |
| Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning | — | 0 |
| Toward Unsupervised Realistic Visual Question Answering | — | 0 |
| Interpretable Visual Question Answering Referring to Outside Knowledge | — | 0 |
| Graph Neural Networks in Vision-Language Image Understanding: A Survey | — | 0 |
| Knowledge-Based Counterfactual Queries for Visual Question Answering | — | 0 |
| VTQA: Visual Text Question Answering via Entity Alignment and Cross-Media Reasoning | Code | 0 |
| Audio-Visual Quality Assessment for User Generated Content: Database and Method | — | 0 |
| VQA with Cascade of Self- and Co-Attention Blocks | — | 0 |
| Medical visual question answering using joint self-supervised learning | — | 0 |
| EVJVQA Challenge: Multilingual Visual Question Answering | — | 0 |
| VinVL+L: Enriching Visual Representation with Location Context in VQA | Code | 0 |
| Few-shot Multimodal Multitask Multilingual Learning | — | 0 |
| Interpretable Medical Image Visual Question Answering via Multi-Modal Relationship Graph Learning | — | 0 |
| Bridge Damage Cause Estimation Using Multiple Images Based on Visual Question Answering | — | 0 |
| Differentiable Outlier Detection Enable Robust Deep Multimodal Analysis | Code | 0 |
| Is Multimodal Vision Supervision Beneficial to Language? | Code | 0 |
| BinaryVQA: A Versatile Test Set to Evaluate the Out-of-Distribution Generalization of VQA Models | Code | 0 |
| Towards a Unified Model for Generating Answers and Explanations in Visual Question Answering | — | 0 |
| HRVQA: A Visual Question Answering Benchmark for High-Resolution Aerial Images | — | 0 |
| Towards Models that Can See and Read | — | 0 |
| Curriculum Script Distillation for Multilingual Visual Question Answering | — | 0 |
| Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks | Code | 0 |
| Adaptively Clustering Neighbor Elements for Image-Text Generation | Code | 0 |
| PromptCap: Prompt-Guided Image Captioning for VQA with GPT-3 | — | 0 |
| Toward Multi-Granularity Decision-Making: Explicit Visual Reasoning with Hierarchical Knowledge | Code | 0 |
| Dynamic Inference With Grounding Based Vision and Language Models | — | 0 |
| RMLVQA: A Margin Loss Approach for Visual Question Answering With Language Biases | — | 0 |
| From Images to Textual Prompts: Zero-Shot Visual Question Answering With Frozen Large Language Models | — | 0 |
| Decouple Before Interact: Multi-Modal Prompt Learning for Continual Visual Question Answering | — | 0 |
| HiTeA: Hierarchical Temporal-Aware Video-Language Pre-training | — | 0 |
| VQA and Visual Reasoning: An Overview of Recent Datasets, Methods and Challenges | — | 0 |
| When are Lemons Purple? The Concept Association Bias of Vision-Language Models | — | 0 |
| From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models | Code | 0 |
| UnICLAM: Contrastive Representation Learning with Adversarial Masking for Unified and Interpretable Medical Vision Question Answering | — | 0 |
| DePlot: One-shot visual language reasoning by plot-to-table translation | — | 0 |
| Towards Unsupervised Visual Reasoning: Do Off-The-Shelf Features Know How to Reason? | — | 0 |
| MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering | — | 0 |
| SceneGATE: Scene-Graph based co-Attention networks for TExt visual question answering | — | 0 |
| CLIPPO: Image-and-Language Understanding from Pixels Only | — | 0 |
| REVEAL: Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge Memory | Code | 0 |
| VideoCoCa: Video-Text Modeling with Zero-Shot Transfer from Contrastive Captioners | — | 0 |
| Review of Ansatz Designing Techniques for Variational Quantum Algorithms | — | 0 |
| ParsVQA-Caps: A Benchmark for Visual Question Answering and Image Captioning in Persian | — | 0 |
Page 26 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | human | Accuracy | 89.3 | — | Unverified |
| 2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | — | Unverified |
| 3 | TRRNet (Ensemble) | Accuracy | 74.03 | — | Unverified |
| 4 | MIL-nbgao | Accuracy | 73.81 | — | Unverified |
| 5 | Kakao Brain | Accuracy | 73.33 | — | Unverified |
| 6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | — | Unverified |
| 7 | 270 | Accuracy | 70.23 | — | Unverified |
| 8 | NSM ensemble (updated) | Accuracy | 67.55 | — | Unverified |
| 9 | VinVL-DPT | Accuracy | 64.92 | — | Unverified |
| 10 | VinVL+L | Accuracy | 64.85 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PaLI | Accuracy | 84.3 | — | Unverified |
| 2 | BEiT-3 | Accuracy | 84.19 | — | Unverified |
| 3 | VLMo | Accuracy | 82.78 | — | Unverified |
| 4 | ONE-PEACE | Accuracy | 82.6 | — | Unverified |
| 5 | mPLUG (Huge) | Accuracy | 82.43 | — | Unverified |
| 6 | CuMo-7B | Accuracy | 82.2 | — | Unverified |
| 7 | X2-VLM (large) | Accuracy | 81.9 | — | Unverified |
| 8 | MMU | Accuracy | 81.26 | — | Unverified |
| 9 | Lyrics | Accuracy | 81.2 | — | Unverified |
| 10 | InternVL-C | Accuracy | 81.2 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BEiT-3 | overall | 84.03 | — | Unverified |
| 2 | mPLUG-Huge | overall | 83.62 | — | Unverified |
| 3 | ONE-PEACE | overall | 82.52 | — | Unverified |
| 4 | X2-VLM (large) | overall | 81.8 | — | Unverified |
| 5 | VLMo | overall | 81.3 | — | Unverified |
| 6 | SimVLM | overall | 80.34 | — | Unverified |
| 7 | X2-VLM (base) | overall | 80.2 | — | Unverified |
| 8 | VAST | overall | 80.19 | — | Unverified |
| 9 | VALOR | overall | 78.62 | — | Unverified |
| 10 | Prompt Tuning | overall | 78.53 | — | Unverified |
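
A note on the metric: "Accuracy" (and "overall") on VQA leaderboards conventionally refers to the consensus accuracy from the original VQA challenge, where each question has ten human answers and a prediction scores min(#matching annotators / 3, 1). Below is a minimal sketch, assuming these tables follow that convention; the official evaluator additionally normalizes answer strings and averages over annotator subsets, which is omitted here.

```python
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Consensus VQA accuracy: full credit if at least 3 of the
    (typically 10) annotators gave the predicted answer, partial
    credit (matches / 3) otherwise."""
    matches = sum(answer == predicted for answer in human_answers)
    return min(matches / 3.0, 1.0)

# Hypothetical example: 6 annotators said "2", 2 said "3", 2 said "two".
answers = ["2"] * 6 + ["3"] * 2 + ["two"] * 2
print(vqa_accuracy("2", answers))  # 1.0  (6 matches >= 3)
print(vqa_accuracy("3", answers))  # 0.666...  (2 matches -> 2/3)
```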