SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system is given an image and a natural-language question about that image and must produce a natural-language answer. The goal is to teach machines to understand visual content well enough to answer free-form questions about it.
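For readers new to the task, below is a minimal sketch of what VQA inference looks like in practice, using the Hugging Face `transformers` visual-question-answering pipeline. The checkpoint name and the image path are illustrative placeholders, not the method behind any entry on this page.

```python
# Minimal VQA inference sketch using the Hugging Face `transformers` pipeline.
# Assumes `transformers` and `Pillow` are installed; the checkpoint and the
# image path ("kitchen.jpg") are illustrative placeholders.
from transformers import pipeline

vqa = pipeline("visual-question-answering",
               model="dandelin/vilt-b32-finetuned-vqa")

# The pipeline takes an image (path, URL, or PIL.Image) plus a question,
# and returns candidate answers ranked by confidence score.
result = vqa(image="kitchen.jpg", question="How many chairs are there?")
print(result[0]["answer"], result[0]["score"])
```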

Image Source: visualqa.org

Papers

Showing 751–800 of 2167 papers

Title | Status | Hype
MUREL: Multimodal Relational Reasoning for Visual Question Answering | Code | 0
MUTAN: Multimodal Tucker Fusion for Visual Question Answering | Code | 0
COLUMBUS: Evaluating COgnitive Lateral Understanding through Multiple-choice reBUSes | Code | 0
Multi-Sourced Compositional Generalization in Visual Question Answering | Code | 0
Multi-Target Embodied Question Answering | Code | 0
Multi-Page Document Visual Question Answering using Self-Attention Scoring Mechanism | Code | 0
Adapting Lightweight Vision Language Models for Radiological Visual Question Answering | Code | 0
Open-Ended Visual Question-Answering | Code | 0
Multiple interaction learning with question-type prior knowledge for constraining answer search space in visual question answering | Code | 0
Generalizing Visual Question Answering from Synthetic to Human-Written Questions via a Chain of QA with a Large Language Model | Code | 0
General Greedy De-bias Learning | Code | 0
Cognitive Visual Commonsense Reasoning Using Dynamic Working Memory | Code | 0
HalLoc: Token-level Localization of Hallucinations for Vision Language Models | Code | 0
Hallucination Benchmark in Medical Visual Question Answering | Code | 0
Multimodal Residual Learning for Visual QA | Code | 0
Multiscale Byte Language Models -- A Hierarchical Architecture for Causal Million-Length Sequence Modeling | Code | 0
NAAQA: A Neural Architecture for Acoustic Question Answering | Code | 0
Ask Your Neurons: A Deep Learning Approach to Visual Question Answering | Code | 0
Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding | Code | 0
Multimodal Explanations: Justifying Decisions and Pointing to the Evidence | Code | 0
GAMIVAL: Video Quality Prediction on Mobile Cloud Gaming Content | Code | 0
Multi-modal Factorized Bilinear Pooling with Co-Attention Learning for Visual Question Answering | Code | 0
Multimodal Hypothetical Summary for Retrieval-based Multi-image Question Answering | Code | 0
Game of Sketches: Deep Recurrent Models of Pictionary-style Word Guessing | Code | 0
FVQ: A Large-Scale Dataset and A LMM-based Method for Face Video Quality Assessment | Code | 0
Co-attending Regions and Detections with Multi-modal Multiplicative Embedding for VQA | Code | 0
Co-attending Free-form Regions and Detections with Multi-modal Multiplicative Feature Embedding for Visual Question Answering | Code | 0
Hierarchical Deep Multi-modal Network for Medical Visual Question Answering | Code | 0
A Joint Sequence Fusion Model for Video Question Answering and Retrieval | Code | 0
Multi-Image Visual Question Answering | Code | 0
Fully Authentic Visual Question Answering Dataset from Online Communities | Code | 0
Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | Code | 0
MQA: Answering the Question via Robotic Manipulation | Code | 0
Adapting Visual Question Answering Models for Enhancing Multimodal Community Q&A Platforms | Code | 0
AIS 2024 Challenge on Video Quality Assessment of User-Generated Content: Methods and Results | Code | 0
A simple neural network module for relational reasoning | Code | 0
A Simple Loss Function for Improving the Convergence and Accuracy of Visual Question Answering Models | Code | 0
Modulating early visual processing by language | Code | 0
CLIPVQA: Video Quality Assessment via CLIP | Code | 0
A Simple Baseline for Knowledge-Based Visual Question Answering | Code | 0
From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models | Code | 0
Modularized Zero-shot VQA with Pre-trained Models | Code | 0
ActivityNet-QA: A Dataset for Understanding Complex Web Videos via Question Answering | Code | 0
Modeling Relationships in Referential Expressions with Compositional Modular Networks | Code | 0
FRAMES-VQA: Benchmarking Fine-Tuning Robustness across Multi-Modal Shifts in Visual Question Answering | Code | 0
ClinKD: Cross-Modal Clinical Knowledge Distiller For Multi-Task Medical Images | Code | 0
A Dataset and Architecture for Visual Reasoning with a Working Memory | Code | 0
How to Determine the Preferred Image Distribution of a Black-Box Vision-Language Model? | Code | 0
MIRTT: Learning Multimodal Interaction Representations from Trilinear Transformers for Visual Question Answering | Code | 0
CLEVR-Ref+: Diagnosing Visual Reasoning with Referring Expressions | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | - | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | - | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | - | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | - | Unverified
5 | Kakao Brain | Accuracy | 73.33 | - | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | - | Unverified
7 | 270 | Accuracy | 70.23 | - | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | - | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | - | Unverified
10 | VinVL+L | Accuracy | 64.85 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | - | Unverified
2 | BEiT-3 | Accuracy | 84.19 | - | Unverified
3 | VLMo | Accuracy | 82.78 | - | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | - | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | - | Unverified
6 | CuMo-7B | Accuracy | 82.2 | - | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | - | Unverified
8 | MMU | Accuracy | 81.26 | - | Unverified
9 | Lyrics | Accuracy | 81.2 | - | Unverified
10 | InternVL-C | Accuracy | 81.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | - | Unverified
2 | mPLUG-Huge | overall | 83.62 | - | Unverified
3 | ONE-PEACE | overall | 82.52 | - | Unverified
4 | X2-VLM (large) | overall | 81.8 | - | Unverified
5 | VLMo | overall | 81.3 | - | Unverified
6 | SimVLM | overall | 80.34 | - | Unverified
7 | X2-VLM (base) | overall | 80.2 | - | Unverified
8 | VAST | overall | 80.19 | - | Unverified
9 | VALOR | overall | 78.62 | - | Unverified
10 | Prompt Tuning | overall | 78.53 | - | Unverified
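
For context on the Accuracy and overall columns above: on VQA-v2-style benchmarks, "accuracy" usually means the consensus metric of Antol et al. (2015), where a predicted answer gets full credit if at least 3 of the 10 human annotators gave it (other benchmarks, e.g. GQA, report plain exact-match accuracy instead). Below is a minimal sketch of that metric, assuming answers have already been normalized (lowercased, punctuation stripped) the way the official evaluation script normalizes them.

```python
# Minimal sketch of the standard "VQA accuracy" metric (Antol et al., 2015).
# Assumes `predicted` and `human_answers` are already normalized strings;
# this is an illustrative reimplementation, not the official eval script.

def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Average, over each leave-one-annotator-out subset of the 10 human
    answers, of min(#matches / 3, 1)."""
    scores = []
    for i in range(len(human_answers)):
        others = human_answers[:i] + human_answers[i + 1:]
        matches = sum(a == predicted for a in others)
        scores.append(min(matches / 3.0, 1.0))
    return sum(scores) / len(scores)

# Example: 4 of 10 annotators answered "2", so predicting "2" scores 1.0.
print(vqa_accuracy("2", ["2", "2", "two", "2", "3", "2", "3", "two", "3", "3"]))
```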