SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer vision task in which a system answers natural-language questions about an image. The goal is to teach machines to understand image content well enough to produce correct answers in natural language.

[Image: example VQA samples. Source: visualqa.org]
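
For a concrete sense of the task, the sketch below runs VQA inference with the Hugging Face transformers visual-question-answering pipeline. The ViLT checkpoint and the image path are illustrative assumptions for this example, not entries from the leaderboards further down.

# Minimal VQA inference sketch using the Hugging Face `transformers` pipeline.
# The checkpoint ("dandelin/vilt-b32-finetuned-vqa") and the image path are
# illustrative choices, not a model from the benchmark tables below.
from transformers import pipeline

vqa = pipeline(
    task="visual-question-answering",
    model="dandelin/vilt-b32-finetuned-vqa",  # ViLT fine-tuned on VQAv2
)

# The pipeline takes an image (file path, URL, or PIL.Image) and a question,
# and returns candidate answers ranked by confidence score.
answers = vqa(image="example.jpg", question="How many dogs are in the picture?")
for candidate in answers:
    print(f"{candidate['answer']}: {candidate['score']:.3f}")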

Papers

Showing 201–250 of 2,167 papers

Title | Status | Hype
A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA | Code | 1
A Dataset and Baselines for Visual Question Answering on Art | Code | 1
AMD-Hummingbird: Towards an Efficient Text-to-Video Model | Code | 1
Hierarchical Conditional Relation Networks for Video Question Answering | Code | 1
HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles | Code | 1
CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers | Code | 1
UniTAB: Unifying Text and Box Outputs for Grounded Vision-Language Modeling | Code | 1
Debiasing Multimodal Models via Causal Information Minimization | Code | 1
Analysis of Video Quality Datasets via Design of Minimalistic Video Quality Models | Code | 1
Debiased Visual Question Answering from Feature and Sample Perspectives | Code | 1
Decoupled Seg Tokens Make Stronger Reasoning Video Segmenter and Grounder | Code | 1
Declaration-based Prompt Tuning for Visual Question Answering | Code | 1
Awaker2.5-VL: Stably Scaling MLLMs with Parameter-Efficient Mixture of Experts | Code | 1
HallE-Control: Controlling Object Hallucination in Large Multimodal Models | Code | 1
Graph Optimal Transport for Cross-Domain Alignment | Code | 1
CRAFT: A Benchmark for Causal Reasoning About Forces and inTeractions | Code | 1
BadCM: Invisible Backdoor Attack Against Cross-Modal Learning | Code | 1
IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | Code | 1
Detecting and Preventing Hallucinations in Large Vision Language Models | Code | 1
Describe Anything Model for Visual Question Answering on Text-rich Images | Code | 1
Greedy Gradient Ensemble for Robust Visual Question Answering | Code | 1
In Defense of Grid Features for Visual Question Answering | Code | 1
Visual Grounding Methods for VQA are Working for the Wrong Reasons! | Code | 1
A Comparison of Pre-trained Vision-and-Language Models for Multimodal Representation Learning across Medical Images and Reports | Code | 1
An Empirical Analysis on Spatial Reasoning Capabilities of Large Multimodal Models | Code | 1
2BiVQA: Double Bi-LSTM based Video Quality Assessment of UGC Videos | Code | 1
3D-Aware Visual Question Answering about Parts, Poses and Occlusions | Code | 1
Attention in Reasoning: Dataset, Analysis, and Modeling | Code | 1
Benchmarking Vision Language Model Unlearning via Fictitious Facial Identity Dataset | Code | 1
Investigating Prompting Techniques for Zero- and Few-Shot Visual Question Answering | Code | 1
A Comprehensive Evaluation of GPT-4V on Knowledge-Intensive Visual Question Answering | Code | 1
Disentangling 3D Prototypical Networks For Few-Shot Concept Learning | Code | 1
An Empirical Study of End-to-End Video-Language Transformers with Masked Visual Modeling | Code | 1
Knowledge-Routed Visual Question Reasoning: Challenges for Deep Representation Embedding | Code | 1
An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA | Code | 1
DualVGR: A Dual-Visual Graph Reasoning Unit for Video Question Answering | Code | 1
GRIT: General Robust Image Task Benchmark | Code | 1
An Empirical Study of Multimodal Model Merging | Code | 1
Beyond Task Performance: Evaluating and Reducing the Flaws of Large Multimodal Models with In-Context Learning | Code | 1
An Empirical Study of Training End-to-End Vision-and-Language Transformers | Code | 1
How Much Can CLIP Benefit Vision-and-Language Tasks? | Code | 1
Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering | Code | 1
Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning | Code | 1
Learning to Answer Visual Questions from Web Videos | Code | 1
Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases | Code | 1
Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA | Code | 1
Going Full-TILT Boogie on Document Understanding with Text-Image-Layout Transformer | Code | 1
Blindly Assess Quality of In-the-Wild Videos via Quality-aware Pre-training and Motion Perception | Code | 1
BiomedCLIP: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs | Code | 1
Attention-Based Context Aware Reasoning for Situation Recognition | Code | 1
Page 5 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | — | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | — | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | — | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | — | Unverified
5 | Kakao Brain | Accuracy | 73.33 | — | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | — | Unverified
7 | 270 | Accuracy | 70.23 | — | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | — | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | — | Unverified
10 | VinVL+L | Accuracy | 64.85 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | — | Unverified
2 | BEiT-3 | Accuracy | 84.19 | — | Unverified
3 | VLMo | Accuracy | 82.78 | — | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | — | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | — | Unverified
6 | CuMo-7B | Accuracy | 82.2 | — | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | — | Unverified
8 | MMU | Accuracy | 81.26 | — | Unverified
9 | Lyrics | Accuracy | 81.2 | — | Unverified
10 | InternVL-C | Accuracy | 81.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | — | Unverified
2 | mPLUG-Huge | overall | 83.62 | — | Unverified
3 | ONE-PEACE | overall | 82.52 | — | Unverified
4 | X2-VLM (large) | overall | 81.8 | — | Unverified
5 | VLMo | overall | 81.3 | — | Unverified
6 | SimVLM | overall | 80.34 | — | Unverified
7 | X2-VLM (base) | overall | 80.2 | — | Unverified
8 | VAST | overall | 80.19 | — | Unverified
9 | VALOR | overall | 78.62 | — | Unverified
10 | Prompt Tuning | overall | 78.53 | — | Unverified