SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a task at the intersection of computer vision and natural language processing: given an image and a free-form question about it, a model must produce an answer in natural language. Solving it requires the model to understand both the visual content of the image and the meaning of the question.

Image Source: visualqa.org
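
As a concrete illustration of the task, the sketch below runs an off-the-shelf VQA model from the Hugging Face transformers library on a single image-question pair. The choice of ViLT ("dandelin/vilt-b32-finetuned-vqa") and the COCO image URL are illustrative assumptions, not models or data referenced by this leaderboard.

```python
# Minimal VQA inference sketch. Assumes transformers, Pillow, and requests
# are installed; the ViLT checkpoint is an illustrative choice, not one of
# the models listed on this page.
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are in the picture?"

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# Encode the (image, question) pair and pick the highest-scoring answer
# from the model's fixed answer vocabulary.
inputs = processor(image, question, return_tensors="pt")
logits = model(**inputs).logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
```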

Papers

Showing 1951–2000 of 2167 papers

Title | Status | Hype
Beyond Raw Videos: Understanding Edited Videos with Large Multimodal Model | Code | 0
A Question-Centric Model for Visual Question Answering in Medical Imaging | Code | 0
Did the Model Understand the Question? | Code | 0
Dialog without Dialog Data: Learning Visual Dialog Agents from VQA Data | Code | 0
InstructOCR: Instruction Boosting Scene Text Spotting | Code | 0
VinVL+L: Enriching Visual Representation with Location Context in VQA | Code | 0
QACE: Asking Questions to Evaluate an Image Caption | Code | 0
Inferring and Executing Programs for Visual Reasoning | Code | 0
Incorporating Probing Signals into Multimodal Machine Translation via Visual Question-Answering Pairs | Code | 0
QAVA: Query-Agnostic Visual Attack to Large Vision-Language Models | Code | 0
Diagnosing and Mitigating Modality Interference in Multimodal Large Language Models | Code | 0
Improving Zero-shot Visual Question Answering via Large Language Models with Reasoning Question Prompts | Code | 0
Beyond Bilinear: Generalized Multimodal Factorized High-order Pooling for Visual Question Answering | Code | 0
QLEVR: A Diagnostic Dataset for Quantificational Language and Elementary Visual Reasoning | Code | 0
ViQuAE, a Dataset for Knowledge-based Visual Question Answering about Named Entities | Code | 0
Tips and Tricks for Visual Question Answering: Learnings from the 2017 Challenge | Code | 0
Beyond Accuracy: A Consolidated Tool for Visual Question Answering Benchmarking | Code | 0
Quantifying and Alleviating the Language Prior Problem in Visual Question Answering | Code | 0
Improving the Cross-Lingual Generalisation in Visual Question Answering | Code | 0
Value-Spectrum: Quantifying Preferences of Vision-Language Models via Value Decomposition in Social Media Contexts | Code | 0
Query and Attention Augmentation for Knowledge-Based Explainable Reasoning | Code | 0
VizWiz Grand Challenge: Answering Visual Questions from Blind People | Code | 0
Improved RAMEN: Towards Domain Generalization for Visual Question Answering | Code | 0
Design as Desired: Utilizing Visual Question Answering for Multimodal Pre-training | Code | 0
VisFIS: Visual Feature Importance Supervision with Right-for-the-Right-Reason Objectives | Code | 0
Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks | Code | 0
Adaptive Score Alignment Learning for Continual Perceptual Quality Assessment of 360-Degree Videos in Virtual Reality | Code | 0
Toward Multi-Granularity Decision-Making: Explicit Visual Reasoning with Hierarchical Knowledge | Code | 0
Applying recent advances in Visual Question Answering to Record Linkage | Code | 0
Delving Deeper into Cross-lingual Visual Question Answering | Code | 0
Towards a performance analysis on pre-trained Visual Question Answering models for autonomous driving | Code | 0
X-GGM: Graph Generative Modeling for Out-of-Distribution Generalization in Visual Question Answering | Code | 0
Towards a Unified Multimodal Reasoning Framework | Code | 0
QuIIL at T3 challenge: Towards Automation in Life-Saving Intervention Procedures from First-Person View | Code | 0
Improved Fusion of Visual and Language Representations by Dense Symmetric Co-Attention for Visual Question Answering | Code | 0
Implicit Differentiable Outlier Detection Enable Robust Deep Multimodal Analysis | Code | 0
Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | Code | 0
Image Captioning for Effective Use of Language Models in Knowledge-Based Visual Question Answering | Code | 0
BERTHop: An Effective Vision-and-Language Model for Chest X-ray Disease Diagnosis | Code | 0
RadGenome-Chest CT: A Grounded Vision-Language Dataset for Chest CT Analysis | Code | 0
Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks | Code | 0
Illusory VQA: Benchmarking and Enhancing Multimodal Models on Visual Illusions | Code | 0
Answer Them All! Toward Universal Visual Question Answering Models | Code | 0
Diversify, Rationalize, and Combine: Ensembling Multiple QA Strategies for Zero-shot Knowledge-based VQA | Code | 0
Benchmarking Multi-dimensional AIGC Video Quality Assessment: A Dataset and Unified Model | Code | 0
Bayesian Low-Rank LeArning (Bella): A Practical Approach to Bayesian Neural Networks | Code | 0
Bilaterally Slimmable Transformer for Elastic and Efficient Visual Question Answering | Code | 0
ILLUME: Rationalizing Vision-Language Models through Human Interactions | Code | 0
Towards Efficient and Robust VQA-NLE Data Generation with Large Vision-Language Models | Code | 0
Barlow constrained optimization for Visual Question Answering | Code | 0

Benchmark Results

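A note on the "Accuracy" metric: VQA-style benchmarks typically score a predicted answer against ten human-provided answers using a consensus rule rather than exact match against a single ground truth. The sketch below shows the commonly quoted simplified form of that rule; whether every leaderboard below uses exactly this scoring is an assumption (the official VQA metric additionally averages over held-out annotator subsets and normalizes answer strings), so treat this as the common convention, not this site's documented procedure.

```python
def vqa_accuracy(prediction: str, human_answers: list[str]) -> float:
    """Simplified VQA consensus metric (assumed convention): a prediction
    counts as fully correct if at least 3 of the (usually 10) annotators
    gave the same answer. Real implementations also normalize the answer
    strings (lowercasing, punctuation stripping) before comparing."""
    matches = sum(ans == prediction for ans in human_answers)
    return min(matches / 3.0, 1.0)

# Example: 4 of 10 annotators answered "2", so predicting "2" scores 1.0.
print(vqa_accuracy("2", ["2"] * 4 + ["3"] * 6))  # 1.0
```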
# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | - | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | - | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | - | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | - | Unverified
5 | Kakao Brain | Accuracy | 73.33 | - | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | - | Unverified
7 | 270 | Accuracy | 70.23 | - | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | - | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | - | Unverified
10 | VinVL+L | Accuracy | 64.85 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | - | Unverified
2 | BEiT-3 | Accuracy | 84.19 | - | Unverified
3 | VLMo | Accuracy | 82.78 | - | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | - | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | - | Unverified
6 | CuMo-7B | Accuracy | 82.2 | - | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | - | Unverified
8 | MMU | Accuracy | 81.26 | - | Unverified
9 | Lyrics | Accuracy | 81.2 | - | Unverified
10 | InternVL-C | Accuracy | 81.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | - | Unverified
2 | mPLUG-Huge | overall | 83.62 | - | Unverified
3 | ONE-PEACE | overall | 82.52 | - | Unverified
4 | X2-VLM (large) | overall | 81.8 | - | Unverified
5 | VLMo | overall | 81.3 | - | Unverified
6 | SimVLM | overall | 80.34 | - | Unverified
7 | X2-VLM (base) | overall | 80.2 | - | Unverified
8 | VAST | overall | 80.19 | - | Unverified
9 | VALOR | overall | 78.62 | - | Unverified
10 | Prompt Tuning | overall | 78.53 | - | Unverified