SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system answers natural-language questions about an image. The goal is for machines to understand an image's content well enough to produce accurate answers, in natural language, to open-ended questions about it.

Image Source: visualqa.org
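
In practice, answering such questions can be as simple as querying a pretrained vision-language model. Below is a minimal sketch using the Hugging Face transformers library with a ViLT checkpoint fine-tuned for VQA; the model choice and example image URL are illustrative, not endorsed by this site:

```python
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# ViLT fine-tuned on VQAv2; it treats VQA as classification
# over a fixed vocabulary of common answers.
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # illustrative COCO image
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

inputs = processor(image, question, return_tensors="pt")
logits = model(**inputs).logits                # one score per candidate answer
answer = model.config.id2label[logits.argmax(-1).item()]
print(answer)  # e.g. "2"
```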

Papers

Showing 151–200 of 2167 papers

| Title | Status | Hype |
| --- | --- | --- |
| CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers | Code | 1 |
| Cross-Modality Relevance for Reasoning on Language and Vision | Code | 1 |
| Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations | Code | 1 |
| CRAFT: A Benchmark for Causal Reasoning About Forces and inTeractions | Code | 1 |
| Counterfactual VQA: A Cause-Effect Look at Language Bias | Code | 1 |
| An Empirical Study of Training End-to-End Vision-and-Language Transformers | Code | 1 |
| Cross-modal Retrieval for Knowledge-based Visual Question Answering | Code | 1 |
| HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles | Code | 1 |
| HIDRO-VQA: High Dynamic Range Oracle for Video Quality Assessment | Code | 1 |
| An Empirical Study of Multimodal Model Merging | Code | 1 |
| An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA | Code | 1 |
| COSA: Concatenated Sample Pretrained Vision-Language Foundation Model | Code | 1 |
| An Empirical Study of End-to-End Video-Language Transformers with Masked Visual Modeling | Code | 1 |
| A Comprehensive Evaluation of GPT-4V on Knowledge-Intensive Visual Question Answering | Code | 1 |
| Counterfactual Samples Synthesizing and Training for Robust Visual Question Answering | Code | 1 |
| Greedy Gradient Ensemble for Robust Visual Question Answering | Code | 1 |
| Counterfactual Samples Synthesizing for Robust Visual Question Answering | Code | 1 |
| 3D-Aware Visual Question Answering about Parts, Poses and Occlusions | Code | 1 |
| Graphhopper: Multi-Hop Scene Graph Reasoning for Visual Question Answering | Code | 1 |
| An Empirical Analysis on Spatial Reasoning Capabilities of Large Multimodal Models | Code | 1 |
| Visual Grounding Methods for VQA are Working for the Wrong Reasons! | Code | 1 |
| A Comparison of Pre-trained Vision-and-Language Models for Multimodal Representation Learning across Medical Images and Reports | Code | 1 |
| Graph Optimal Transport for Cross-Domain Alignment | Code | 1 |
| GRIT: General Robust Image Task Benchmark | Code | 1 |
| Hierarchical Conditional Relation Networks for Video Question Answering | Code | 1 |
| ConTEXTual Net: A Multimodal Vision-Language Model for Segmentation of Pneumothorax | Code | 1 |
| Going Full-TILT Boogie on Document Understanding with Text-Image-Layout Transformer | Code | 1 |
| GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering | Code | 1 |
| Consistency-preserving Visual Question Answering in Medical Imaging | Code | 1 |
| Content-Rich AIGC Video Quality Assessment via Intricate Text Alignment and Motion-Aware Consistency | Code | 1 |
| Analysis of Video Quality Datasets via Design of Minimalistic Video Quality Models | Code | 1 |
| Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts | Code | 1 |
| GeoLLaVA-8K: Scaling Remote-Sensing Multimodal Large Language Models to 8K Resolution | Code | 1 |
| Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization | Code | 1 |
| Generative Bias for Robust Visual Question Answering | Code | 1 |
| AMD-Hummingbird: Towards an Efficient Text-to-Video Model | Code | 1 |
| A Dataset and Baselines for Visual Question Answering on Art | Code | 1 |
| Compositional Attention Networks for Machine Reasoning | Code | 1 |
| Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs? | Code | 1 |
| Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers | Code | 1 |
| FunQA: Towards Surprising Video Comprehension | Code | 1 |
| Gemini Goes to Med School: Exploring the Capabilities of Multimodal Large Language Models on Medical Challenge Problems & Hallucinations | Code | 1 |
| ConceptBert: Concept-Aware Representation for Visual Question Answering | Code | 1 |
| Cognitive Visual-Language Mapper: Advancing Multimodal Comprehension with Enhanced Visual Knowledge Alignment | Code | 1 |
| Contrast and Classify: Training Robust VQA Models | Code | 1 |
| 2BiVQA: Double Bi-LSTM based Video Quality Assessment of UGC Videos | Code | 1 |
| Combo of Thinking and Observing for Outside-Knowledge VQA | Code | 1 |
| Attention in Reasoning: Dataset, Analysis, and Modeling | Code | 1 |
| GeneAnnotator: A Semi-automatic Annotation Tool for Visual Scene Graph | Code | 1 |
| Genixer: Empowering Multimodal Large Language Models as a Powerful Data Generator | Code | 1 |

Benchmark Results
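
The leaderboards below list claimed results that have not yet been independently verified. For VQA-style benchmarks, the reported accuracy is commonly the consensus metric, which scores a predicted answer against ten human-annotated answers. A minimal sketch of that metric, assuming the standard VQA v2 formulation (function and variable names are illustrative):

```python
def vqa_consensus_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Consensus accuracy for one question, as in the standard VQA
    evaluation: a prediction counts as fully correct when at least 3
    of the (typically 10) human annotators gave that answer.

        acc = min(#annotators matching the prediction / 3, 1)

    The official script additionally normalizes answer strings and
    averages over all 10-choose-9 annotator subsets; omitted here.
    """
    matches = sum(ans == predicted for ans in human_answers)
    return min(matches / 3.0, 1.0)


# Example: 2 of 10 annotators answered "blue" -> partial credit of 2/3.
answers = ["blue", "blue"] + ["navy"] * 8
print(round(vqa_consensus_accuracy("blue", answers), 2))  # 0.67
```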

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | human | Accuracy | 89.3 | | Unverified |
| 2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | | Unverified |
| 3 | TRRNet (Ensemble) | Accuracy | 74.03 | | Unverified |
| 4 | MIL-nbgao | Accuracy | 73.81 | | Unverified |
| 5 | Kakao Brain | Accuracy | 73.33 | | Unverified |
| 6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | | Unverified |
| 7 | 270 | Accuracy | 70.23 | | Unverified |
| 8 | NSM ensemble (updated) | Accuracy | 67.55 | | Unverified |
| 9 | VinVL-DPT | Accuracy | 64.92 | | Unverified |
| 10 | VinVL+L | Accuracy | 64.85 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PaLI | Accuracy | 84.3 | | Unverified |
| 2 | BEiT-3 | Accuracy | 84.19 | | Unverified |
| 3 | VLMo | Accuracy | 82.78 | | Unverified |
| 4 | ONE-PEACE | Accuracy | 82.6 | | Unverified |
| 5 | mPLUG (Huge) | Accuracy | 82.43 | | Unverified |
| 6 | CuMo-7B | Accuracy | 82.2 | | Unverified |
| 7 | X2-VLM (large) | Accuracy | 81.9 | | Unverified |
| 8 | MMU | Accuracy | 81.26 | | Unverified |
| 9 | InternVL-C | Accuracy | 81.2 | | Unverified |
| 10 | Lyrics | Accuracy | 81.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BEiT-3 | overall | 84.03 | | Unverified |
| 2 | mPLUG-Huge | overall | 83.62 | | Unverified |
| 3 | ONE-PEACE | overall | 82.52 | | Unverified |
| 4 | X2-VLM (large) | overall | 81.8 | | Unverified |
| 5 | VLMo | overall | 81.3 | | Unverified |
| 6 | SimVLM | overall | 80.34 | | Unverified |
| 7 | X2-VLM (base) | overall | 80.2 | | Unverified |
| 8 | VAST | overall | 80.19 | | Unverified |
| 9 | VALOR | overall | 78.62 | | Unverified |
| 10 | Prompt Tuning | overall | 78.53 | | Unverified |