SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system is given an image and a natural-language question about that image and must produce an accurate natural-language answer. Solving it requires jointly understanding the visual content of the image and the meaning of the question.

Image Source: visualqa.org
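
As a concrete illustration of the task (independent of any specific entry in the lists below), here is a minimal inference sketch using the publicly released ViLT VQA checkpoint on Hugging Face (dandelin/vilt-b32-finetuned-vqa). The sample image URL and question are arbitrary placeholders:

```python
# Minimal VQA inference sketch using the public ViLT checkpoint
# "dandelin/vilt-b32-finetuned-vqa" from Hugging Face transformers.
# The image URL and question below are arbitrary placeholders.
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # COCO sample image
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# Encode the image-question pair and pick the highest-scoring answer
# from the model's fixed answer vocabulary.
inputs = processor(image, question, return_tensors="pt")
logits = model(**inputs).logits
answer = model.config.id2label[logits.argmax(-1).item()]
print(answer)
```

ViLT treats VQA as classification over a fixed vocabulary of common answers, a typical setup for VQA v2-style models; other systems on the leaderboards below instead generate answers as free-form text.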

Papers

Showing 851–900 of 2167 papers

Title | Status | Hype
CLEVR_HYP: A Challenge Dataset and Baselines for Visual Question Answering with Hypothetical Actions over Images | Code | 0
Show, Ask, Attend, and Answer: A Strong Baseline For Visual Question Answering | Code | 0
Simple Baseline for Visual Question Answering | Code | 0
ArtQuest: Countering Hidden Language Biases in ArtVQA | Code | 0
Multi-modal Factorized Bilinear Pooling with Co-Attention Learning for Visual Question Answering | Code | 0
Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding | Code | 0
HumaniBench: A Human-Centric Framework for Large Multimodal Models Evaluation | Code | 0
Incorporating Probing Signals into Multimodal Machine Translation via Visual Question-Answering Pairs | Code | 0
CLEAR: A Dataset for Compositional Language and Elementary Acoustic Reasoning | Code | 0
D3: Data Diversity Design for Systematic Generalization in Visual Question Answering | Code | 0
Inferring and Executing Programs for Visual Reasoning | Code | 0
Multimodal Explanations: Justifying Decisions and Pointing to the Evidence | Code | 0
Enhancing Interpretability and Interactivity in Robot Manipulation: A Neurosymbolic Approach | Code | 0
Multi-Image Visual Question Answering | Code | 0
Filling the Image Information Gap for VQA: Prompting Large Language Models to Proactively Ask Questions | Code | 0
FigureQA: An Annotated Figure Dataset for Visual Reasoning | Code | 0
Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering | Code | 0
MUREL: Multimodal Relational Reasoning for Visual Question Answering | Code | 0
OmniNet: A unified architecture for multi-modal multi-task learning | Code | 0
Few-Shot Multimodal Explanation for Visual Question Answering | Code | 0
Federated Document Visual Question Answering: A Pilot Study | Code | 0
Modulating early visual processing by language | Code | 0
Modeling Relationships in Referential Expressions with Compositional Modular Networks | Code | 0
StarVQA: Space-Time Attention for Video Quality Assessment | Code | 0
Factor Graph Attention | Code | 0
InternLM-XComposer: A Vision-Language Large Model for Advanced Text-image Comprehension and Composition | Code | 0
Active Learning for Visual Question Answering: An Empirical Study | Code | 0
Modularized Zero-shot VQA with Pre-trained Models | Code | 0
Are VLMs Really Blind | Code | 0
MIRTT: Learning Multimodal Interaction Representations from Trilinear Transformers for Visual Question Answering | Code | 0
Exploring the Potential of Encoder-free Architectures in 3D LMMs | Code | 0
Subjective and Objective Quality Assessment of High-Motion Sports Videos at Low-Bitrates | Code | 0
Exploring the Effectiveness of Video Perceptual Representation in Blind Video Quality Assessment | Code | 0
Mimic and Fool: A Task Agnostic Adversarial Attack | Code | 0
MHSAN: Multi-Head Self-Attention Network for Visual Semantic Embedding | Code | 0
Exploring Modulated Detection Transformer as a Tool for Action Recognition in Videos | Code | 0
Exploring Models and Data for Image Question Answering | Code | 0
Med-PMC: Medical Personalized Multi-modal Consultation with a Proactive Ask-First-Observe-Next Paradigm | Code | 0
Are Red Roses Red? Evaluating Consistency of Question-Answering Models | Code | 0
Measuring Faithful and Plausible Visual Grounding in VQA | Code | 0
Intrinsic Subgraph Generation for Interpretable Graph based Visual Question Answering | Code | 0
MedHallTune: An Instruction-Tuning Benchmark for Mitigating Medical Hallucination in Vision-Language Models | Code | 0
Explainable and Explicit Visual Reasoning over Scene Graphs | Code | 0
Bayesian Low-Rank LeArning (Bella): A Practical Approach to Bayesian Neural Networks | Code | 0
CAST: Cross-modal Alignment Similarity Test for Vision Language Models | Code | 0
Marten: Visual Question Answering with Mask Generation for Multi-modal Document Understanding | Code | 0
Cascaded Mutual Modulation for Visual Reasoning | Code | 0
ActionCOMET: A Zero-shot Approach to Learn Image-specific Commonsense Concepts about Actions | Code | 0
CARETS: A Consistency And Robustness Evaluative Test Suite for VQA | Code | 0
MaMMUT: A Simple Architecture for Joint Learning for MultiModal Tasks | Code | 0
Page 18 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | — | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | — | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | — | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | — | Unverified
5 | Kakao Brain | Accuracy | 73.33 | — | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | — | Unverified
7 | 270 | Accuracy | 70.23 | — | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | — | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | — | Unverified
10 | VinVL+L | Accuracy | 64.85 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | — | Unverified
2 | BEiT-3 | Accuracy | 84.19 | — | Unverified
3 | VLMo | Accuracy | 82.78 | — | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | — | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | — | Unverified
6 | CuMo-7B | Accuracy | 82.2 | — | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | — | Unverified
8 | MMU | Accuracy | 81.26 | — | Unverified
9 | Lyrics | Accuracy | 81.2 | — | Unverified
10 | InternVL-C | Accuracy | 81.2 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | — | Unverified
2 | mPLUG-Huge | overall | 83.62 | — | Unverified
3 | ONE-PEACE | overall | 82.52 | — | Unverified
4 | X2-VLM (large) | overall | 81.8 | — | Unverified
5 | VLMo | overall | 81.3 | — | Unverified
6 | SimVLM | overall | 80.34 | — | Unverified
7 | X2-VLM (base) | overall | 80.2 | — | Unverified
8 | VAST | overall | 80.19 | — | Unverified
9 | VALOR | overall | 78.62 | — | Unverified
10 | Prompt Tuning | overall | 78.53 | — | Unverified
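
The benchmarks behind these three tables are not named on this page; on common VQA leaderboards the "Accuracy" / "overall" metric is either plain exact-match accuracy or the consensus metric from visualqa.org, which scores a prediction by its agreement with ten human annotators. A minimal sketch of the latter, simplified to the common min(matches/3, 1) form (the official evaluator also normalizes answer strings and averages over leave-one-annotator-out subsets, both omitted here):

```python
# Sketch of the standard VQA consensus accuracy (visualqa.org):
# a prediction counts as fully correct if at least 3 of the 10
# human annotators gave the same answer. Dataset-level accuracy
# is the mean of this score over all questions.
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    matches = sum(ans == predicted for ans in human_answers)
    return min(matches / 3.0, 1.0)

# Example: 4 of 10 annotators answered "2", so "2" scores 1.0;
# note the un-normalized "two" strings do not count as matches here.
print(vqa_accuracy("2", ["2"] * 4 + ["3"] * 3 + ["two"] * 3))  # 1.0
```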