SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer vision task in which a system is given an image together with a natural-language question about that image and must produce a natural-language answer. Solving it requires the model to jointly understand visual content and language.
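
As a concrete illustration of the input/output interface, the sketch below runs an off-the-shelf VQA model through the Hugging Face transformers pipeline. It is a minimal example, not tied to any paper or leaderboard entry on this page; the checkpoint name and image path are illustrative assumptions.

```python
# Minimal VQA inference sketch using the Hugging Face `transformers` pipeline.
# Assumes `transformers`, `torch`, and `Pillow` are installed; the checkpoint
# "dandelin/vilt-b32-finetuned-vqa" and the image file are example choices only.
from transformers import pipeline
from PIL import Image

vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")

image = Image.open("street_scene.jpg")  # any RGB image
question = "How many people are crossing the street?"

# The pipeline returns candidate answers with confidence scores,
# e.g. [{"score": 0.87, "answer": "2"}, ...]
for candidate in vqa(image=image, question=question, top_k=3):
    print(f'{candidate["answer"]}: {candidate["score"]:.2f}')
```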

[Illustrative VQA example image omitted; source: visualqa.org]

Papers

Showing 1001-1050 of 2167 papers

Title | Status | Hype
Measuring Faithful and Plausible Visual Grounding in VQA | Code | 0
EaSe: A Diagnostic Tool for VQA based on Answer Diversity | Code | 0
Dynamic Task and Weight Prioritization Curriculum Learning for Multimodal Imagery | Code | 0
Learning to Count Objects in Natural Images for Visual Question Answering | Code | 0
Dynamic Memory Networks for Visual and Textual Question Answering | Code | 0
Targeted Visual Prompting for Medical Visual Question Answering | Code | 0
Dynamic Key-value Memory Enhanced Multi-step Graph Reasoning for Knowledge-based Visual Question Answering | Code | 0
MaMMUT: A Simple Architecture for Joint Learning for MultiModal Tasks | Code | 0
Breaking Annotation Barriers: Generalized Video Quality Assessment via Ranking-based Self-Supervision | Code | 0
Marten: Visual Question Answering with Mask Generation for Multi-modal Document Understanding | Code | 0
DVQA: Understanding Data Visualizations via Question Answering | Code | 0
Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering | Code | 0
M^2ConceptBase: A Fine-Grained Aligned Concept-Centric Multimodal Knowledge Base | Code | 0
LXMERT Model Compression for Visual Question Answering | Code | 0
μ-Bench: A Vision-Language Benchmark for Microscopy Understanding | Code | 0
DualVD: An Adaptive Dual Encoding Model for Deep Visual Understanding in Visual Dialogue | Code | 0
Self-Bootstrapped Visual-Language Model for Knowledge Selection and Question Answering | Code | 0
Dual Recurrent Attention Units for Visual Question Answering | Code | 0
DrishtiKon: Multi-Granular Visual Grounding for Text-Rich Document Images | Code | 0
Loss re-scaling VQA: Revisiting the Language Prior Problem from a Class-imbalance View | Code | 0
Lightweight Recurrent Cross-modal Encoder for Video Question Answering | Code | 0
LPF: A Language-Prior Feedback Objective Function for De-biased Visual Question Answering | Code | 0
Logical Implications for Visual Question Answering Consistency | Code | 0
Dual Attention Networks for Visual Reference Resolution in Visual Dialog | Code | 0
Locally Smoothed Neural Networks | Code | 0
LLM-Assisted Multi-Teacher Continual Learning for Visual Question Answering in Robotic Surgery | Code | 0
Dual Attention Networks for Multimodal Reasoning and Matching | Code | 0
LMM-VQA: Advancing Video Quality Assessment with Large Multimodal Models | Code | 0
Looking Beyond Visible Cues: Implicit Video Question Answering via Dual-Clue Reasoning | Code | 0
Mimic and Fool: A Task Agnostic Adversarial Attack | Code | 0
Simple Baseline for Visual Question Answering | Code | 0
Less Is More: Linear Layers on CLIP Features as Powerful VizWiz Model | | 0
LEGO-Puzzles: How Good Are MLLMs at Multi-Step Spatial Reasoning? | | 0
Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision | | 0
Learning Visual Knowledge Memory Networks for Visual Question Answering | | 0
D-Rax: Domain-specific Radiologic assistant leveraging multi-modal data and eXpert model predictions | | 0
An Evaluation of Image-Based Verb Prediction Models against Human Eye-Tracking Data | | 0
Learning to Specialize with Knowledge Distillation for Visual Question Answering | | 0
Learning to Select Question-Relevant Relations for Visual Question Answering | | 0
Learning to Recognize the Unseen Visual Predicates | | 0
Neural Reasoning, Fast and Slow, for Video Question Answering | | 0
DoReMi: Grounding Language Model by Detecting and Recovering from Plan-Execution Misalignment | | 0
Learning to Reason Iteratively and Parallelly for Complex Visual Reasoning Scenarios | | 0
An Evaluation of GPT-4V and Gemini in Online VQA | | 0
Adventurer's Treasure Hunt: A Transparent System for Visually Grounded Compositional Visual Question Answering based on Scene Graphs | | 0
Learning to Disambiguate by Asking Discriminative Questions | | 0
Domain-robust VQA with diverse datasets and methods but no target labels | | 0
Do Explanations make VQA Models more Predictable to a Human? | | 0
Learning to Compress Contexts for Efficient Knowledge-based Visual Question Answering | | 0
Learning to Compose Diversified Prompts for Image Emotion Classification | | 0
Page 21 of 44

Benchmark Results
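
The tables below report each model's claimed score next to its verification status. For reference, the "Accuracy" used by most VQA leaderboards is the soft, consensus-based metric defined by visualqa.org rather than exact match; the sketch below is a simplified version of that metric (an assumption about the evaluation protocol for these entries, since the page does not state it), and the official evaluator additionally normalizes answer strings and averages over annotator subsets.

```python
def vqa_soft_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Simplified consensus metric from visualqa.org: a predicted answer is
    fully correct if at least 3 of the (typically 10) human annotators gave
    the same answer, and is partially credited otherwise."""
    matches = sum(ans == predicted for ans in human_answers)
    return min(matches / 3.0, 1.0)

# Ten human answers for one question; "2" was given by 4 annotators.
answers = ["2", "2", "two", "2", "3", "2", "a few", "3", "two", "several"]
print(vqa_soft_accuracy("2", answers))  # -> 1.0
print(vqa_soft_accuracy("3", answers))  # -> 0.67 (2 of 3 required matches)
```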

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | | Unverified
5 | Kakao Brain | Accuracy | 73.33 | | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | | Unverified
7 | 270 | Accuracy | 70.23 | | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | | Unverified
10 | VinVL+L | Accuracy | 64.85 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | | Unverified
2 | BEiT-3 | Accuracy | 84.19 | | Unverified
3 | VLMo | Accuracy | 82.78 | | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | | Unverified
6 | CuMo-7B | Accuracy | 82.2 | | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | | Unverified
8 | MMU | Accuracy | 81.26 | | Unverified
9 | Lyrics | Accuracy | 81.2 | | Unverified
10 | InternVL-C | Accuracy | 81.2 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | | Unverified
2 | mPLUG-Huge | overall | 83.62 | | Unverified
3 | ONE-PEACE | overall | 82.52 | | Unverified
4 | X2-VLM (large) | overall | 81.8 | | Unverified
5 | VLMo | overall | 81.3 | | Unverified
6 | SimVLM | overall | 80.34 | | Unverified
7 | X2-VLM (base) | overall | 80.2 | | Unverified
8 | VAST | overall | 80.19 | | Unverified
9 | VALOR | overall | 78.62 | | Unverified
10 | Prompt Tuning | overall | 78.53 | | Unverified