SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a model answers natural-language questions about an image. Doing so requires the model to jointly understand the visual content of the image and the semantics of the question, and to produce its answer in natural language.

Image Source: visualqa.org
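
Trying the task with an off-the-shelf model takes only a few lines. The sketch below is a minimal example, assuming the Hugging Face transformers library (with a PyTorch backend and Pillow installed) and the publicly available dandelin/vilt-b32-finetuned-vqa checkpoint; the image path and question are placeholders.

```python
from transformers import pipeline

# Load a ViLT model fine-tuned on VQA v2 via the generic
# visual-question-answering pipeline.
vqa = pipeline(
    "visual-question-answering",
    model="dandelin/vilt-b32-finetuned-vqa",
)

# Ask a natural-language question about a local image.
answers = vqa(image="street_scene.jpg", question="What color is the car?")
print(answers[0])  # e.g. {'score': 0.87, 'answer': 'red'}
```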

Papers

Showing 901–950 of 2167 papers

Title | Status | Hype
Modularized Zero-shot VQA with Pre-trained Models | Code | 0
Bilaterally Slimmable Transformer for Elastic and Efficient Visual Question Answering | Code | 0
Modeling Relationships in Referential Expressions with Compositional Modular Networks | Code | 0
Modulating early visual processing by language | Code | 0
Is Multimodal Vision Supervision Beneficial to Language? | Code | 0
Exploring Modulated Detection Transformer as a Tool for Action Recognition in Videos | Code | 0
Exploring Models and Data for Image Question Answering | Code | 0
Are Red Roses Red? Evaluating Consistency of Question-Answering Models | Code | 0
MIRTT: Learning Multimodal Interaction Representations from Trilinear Transformers for Visual Question Answering | Code | 0
Mimic and Fool: A Task Agnostic Adversarial Attack | Code | 0
Explainable and Explicit Visual Reasoning over Scene Graphs | Code | 0
CAST: Cross-modal Alignment Similarity Test for Vision Language Models | Code | 0
Cascaded Mutual Modulation for Visual Reasoning | Code | 0
ActionCOMET: A Zero-shot Approach to Learn Image-specific Commonsense Concepts about Actions | Code | 0
CARETS: A Consistency And Robustness Evaluative Test Suite for VQA | Code | 0
MHSAN: Multi-Head Self-Attention Network for Visual Semantic Embedding | Code | 0
Transformer Module Networks for Systematic Generalization in Visual Question Answering | Code | 0
MedHallTune: An Instruction-Tuning Benchmark for Mitigating Medical Hallucination in Vision-Language Models | Code | 0
Medical Large Vision Language Models with Multi-Image Visual Ability | Code | 0
A Question-Centric Model for Visual Question Answering in Medical Imaging | Code | 0
Applying recent advances in Visual Question Answering to Record Linkage | Code | 0
Delving Deeper into Cross-lingual Visual Question Answering | Code | 0
A Dual-Attention Learning Network with Word and Sentence Embedding for Medical Visual Question Answering | Code | 0
Knowing Earlier what Right Means to You: A Comprehensive VQA Dataset for Grounding Relative Directions via Multi-Task Learning | Code | 0
Med-PMC: Medical Personalized Multi-modal Consultation with a Proactive Ask-First-Observe-Next Paradigm | Code | 0
ERVQA: A Dataset to Benchmark the Readiness of Large Vision Language Models in Hospital Environments | Code | 0
μ-Bench: A Vision-Language Benchmark for Microscopy Understanding | Code | 0
Marten: Visual Question Answering with Mask Generation for Multi-modal Document Understanding | Code | 0
Measuring Faithful and Plausible Visual Grounding in VQA | Code | 0
Multimodal Residual Learning for Visual QA | Code | 0
LXMERT Model Compression for Visual Question Answering | Code | 0
M^2ConceptBase: A Fine-Grained Aligned Concept-Centric Multimodal Knowledge Base | Code | 0
Enhancing Vietnamese VQA through Curriculum Learning on Raw and Augmented Text Representations | Code | 0
Enhancing the AI2 Diagrams Dataset Using Rhetorical Structure Theory | Code | 0
Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering | Code | 0
Looking Beyond Visible Cues: Implicit Video Question Answering via Dual-Clue Reasoning | Code | 0
Loss re-scaling VQA: Revisiting the Language Prior Problem from a Class-imbalance View | Code | 0
KOFFVQA: An Objectively Evaluated Free-form VQA Benchmark for Large Vision-Language Models in the Korean Language | Code | 0
Locally Smoothed Neural Networks | Code | 0
Logical Implications for Visual Question Answering Consistency | Code | 0
Kvasir-VQA: A Text-Image Pair GI Tract Dataset | Code | 0
Kvasir-VQA-x1: A Multimodal Dataset for Medical Reasoning and Robust MedVQA in Gastrointestinal Endoscopy | Code | 0
LPF: A Language-Prior Feedback Objective Function for De-biased Visual Question Answering | Code | 0
Enhancing Continual Learning in Visual Question Answering with Modality-Aware Feature Distillation | Code | 0
LMM-VQA: Advancing Video Quality Assessment with Large Multimodal Models | Code | 0
Answer Them All! Toward Universal Visual Question Answering Models | Code | 0
LLM-Assisted Multi-Teacher Continual Learning for Visual Question Answering in Robotic Surgery | Code | 0
End-to-end optimization of goal-driven and visually grounded dialogue systems | Code | 0
End-to-End Instance Segmentation with Recurrent Attention | Code | 0
Answer Questions with Right Image Regions: A Visual Attention Regularization Approach | Code | 0
Page 19 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | n/a | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | n/a | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | n/a | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | n/a | Unverified
5 | Kakao Brain | Accuracy | 73.33 | n/a | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | n/a | Unverified
7 | 270 | Accuracy | 70.23 | n/a | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | n/a | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | n/a | Unverified
10 | VinVL+L | Accuracy | 64.85 | n/a | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | n/a | Unverified
2 | BEiT-3 | Accuracy | 84.19 | n/a | Unverified
3 | VLMo | Accuracy | 82.78 | n/a | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | n/a | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | n/a | Unverified
6 | CuMo-7B | Accuracy | 82.2 | n/a | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | n/a | Unverified
8 | MMU | Accuracy | 81.26 | n/a | Unverified
9 | Lyrics | Accuracy | 81.2 | n/a | Unverified
10 | InternVL-C | Accuracy | 81.2 | n/a | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | n/a | Unverified
2 | mPLUG-Huge | overall | 83.62 | n/a | Unverified
3 | ONE-PEACE | overall | 82.52 | n/a | Unverified
4 | X2-VLM (large) | overall | 81.8 | n/a | Unverified
5 | VLMo | overall | 81.3 | n/a | Unverified
6 | SimVLM | overall | 80.34 | n/a | Unverified
7 | X2-VLM (base) | overall | 80.2 | n/a | Unverified
8 | VAST | overall | 80.19 | n/a | Unverified
9 | VALOR | overall | 78.62 | n/a | Unverified
10 | Prompt Tuning | overall | 78.53 | n/a | Unverified
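
A note on the Accuracy columns: for leaderboards in the VQA v2 family, the reported accuracy is not plain exact match. The official metric scores a prediction against ten human answers, crediting min(#agreeing humans / 3, 1), averaged over all ten leave-one-annotator-out subsets. Below is a minimal sketch of that scoring rule; it omits the official answer normalization (lowercasing, article and punctuation stripping, contraction handling) that the real evaluator applies first.

```python
from itertools import combinations

def vqa_accuracy(prediction: str, human_answers: list[str]) -> float:
    """Consensus accuracy in the style of the VQA v2 challenge.

    The prediction is scored against every 9-annotator subset of the
    10 human answers; each subset contributes min(#matches / 3, 1).
    NOTE: answer normalization from the official evaluator is omitted.
    """
    scores = []
    for subset in combinations(human_answers, len(human_answers) - 1):
        matches = sum(ans == prediction for ans in subset)
        scores.append(min(matches / 3.0, 1.0))
    return sum(scores) / len(scores)

# Example: 4 of 10 annotators said "red", so every 9-annotator subset
# still contains at least 3 matches and the prediction scores 1.0.
humans = ["red"] * 4 + ["dark red"] * 3 + ["maroon"] * 3
print(vqa_accuracy("red", humans))  # 1.0
```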