SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a model receives an image together with a natural-language question about it and must produce a natural-language answer. Solving it requires jointly understanding the visual content of the image and the meaning of the question.

[Figure omitted. Image source: visualqa.org]
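To make the input/output format concrete, here is a minimal inference sketch in Python using the off-the-shelf ViLT model from Hugging Face Transformers (dandelin/vilt-b32-finetuned-vqa). The model choice, image URL, and question are illustrative assumptions, not part of this leaderboard; any of the models listed below could play the same role.

```python
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# Illustrative inputs: a COCO validation image and a free-form question.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

# ViLT fine-tuned on VQAv2 casts VQA as classification over ~3,100 common answers.
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

inputs = processor(image, question, return_tensors="pt")
logits = model(**inputs).logits
answer = model.config.id2label[logits.argmax(-1).item()]
print("Predicted answer:", answer)  # e.g. "2"
```

Treating VQA as classification over a fixed answer vocabulary is one common design; generative models (e.g. PaLI in the benchmark results below) instead decode the answer token by token.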

Papers

Showing 451–500 of 2167 papers

Title | Status | Hype
eP-ALM: Efficient Perceptual Augmentation of Language Models | Code | 1
Large Scale Multimodal Classification Using an Ensemble of Transformer Models and Co-Attention | Code | 1
MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research | Code | 1
Large-Scale Adversarial Training for Vision-and-Language Representation Learning | Code | 1
End-to-end Knowledge Retrieval with Multi-modal Queries | Code | 1
Deep Multimodal Neural Architecture Search | Code | 1
Can I Trust Your Answer? Visually Grounded Video Question Answering | Code | 1
LaTr: Layout-Aware Transformer for Scene-Text VQA | Code | 1
Benchmarking Vision Language Model Unlearning via Fictitious Facial Identity Dataset | Code | 1
LIVE: Learnable In-Context Vector for Visual Question Answering | Code | 1
Skipping Computations in Multimodal LLMs | Code | 1
Describe Anything Model for Visual Question Answering on Text-rich Images | Code | 1
End-to-end Document Recognition and Understanding with Dessurt | Code | 1
ERNIE-Layout: Layout Knowledge Enhanced Pre-training for Visually-rich Document Understanding | Code | 1
Sparse Continuous Distributions and Fenchel-Young Losses | Code | 1
Detecting and Preventing Hallucinations in Large Vision Language Models | Code | 1
MediConfusion: Can you trust your AI radiologist? Probing the reliability of multimodal medical foundation models | Code | 1
An Empirical Study of End-to-End Video-Language Transformers with Masked Visual Modeling | Code | 1
DeVLBert: Learning Deconfounded Visio-Linguistic Representations | Code | 1
StableVQA: A Deep No-Reference Quality Assessment Model for Video Stability | Code | 1
An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA | Code | 1
MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models | Code | 1
Learning Situation Hyper-Graphs for Video Question Answering | Code | 1
Learning to Answer Questions in Dynamic Audio-Visual Scenarios | Code | 1
Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering | Code | 1
Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning | Code | 1
An Empirical Study of Multimodal Model Merging | Code | 1
Surgical-VQA: Visual Question Answering in Surgical Scenes using Transformer | Code | 1
Mimic In-Context Learning for Multimodal Tasks | Code | 1
Learning to Discretely Compose Reasoning Module Networks for Video Captioning | Code | 1
An Empirical Study of Training End-to-End Vision-and-Language Transformers | Code | 1
LiveXiv -- A Multi-Modal Live Benchmark Based on Arxiv Papers Content | Code | 1
Calibrating Concepts and Operations: Towards Symbolic Reasoning on Real Images | Code | 1
EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images | Code | 1
MedBLIP: Bootstrapping Language-Image Pre-training from 3D Medical Images and Texts | Code | 1
Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering | Code | 1
Efficient Vision-Language Pretraining with Visual Concepts and Hierarchical Alignment | Code | 1
Answer Mining from a Pool of Images: Towards Retrieval-Based Visual Question Answering | Code | 1
MedAgentBoard: Benchmarking Multi-Agent Collaboration with Conventional Methods for Diverse Medical Tasks | Code | 1
Less is More: ClipBERT for Video-and-Language Learning via Sparse Sampling | Code | 1
MedCoT: Medical Chain of Thought via Hierarchical Expert | Code | 1
Light-VQA+: A Video Quality Assessment Model for Exposure Correction with Vision-Language Guidance | Code | 1
DocFormerv2: Local Features for Document Understanding | Code | 1
TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding | Code | 1
MD-VQA: Multi-Dimensional Quality Assessment for UGC Live Videos | Code | 1
EarthVQA: Towards Queryable Earth via Relational Reasoning-Based Remote Sensing Visual Question Answering | Code | 1
3DMIT: 3D Multi-modal Instruction Tuning for Scene Understanding | Code | 1
DocVQA: A Dataset for VQA on Document Images | Code | 1
MDETR -- Modulated Detection for End-to-End Multi-Modal Understanding | Code | 1
Mind Your Outliers! Investigating the Negative Impact of Outliers on Active Learning for Visual Question Answering | Code | 1
Page 10 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | – | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | – | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | – | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | – | Unverified
5 | Kakao Brain | Accuracy | 73.33 | – | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | – | Unverified
7 | 270 | Accuracy | 70.23 | – | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | – | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | – | Unverified
10 | VinVL+L | Accuracy | 64.85 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | – | Unverified
2 | BEiT-3 | Accuracy | 84.19 | – | Unverified
3 | VLMo | Accuracy | 82.78 | – | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | – | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | – | Unverified
6 | CuMo-7B | Accuracy | 82.2 | – | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | – | Unverified
8 | MMU | Accuracy | 81.26 | – | Unverified
9 | Lyrics | Accuracy | 81.2 | – | Unverified
10 | InternVL-C | Accuracy | 81.2 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | – | Unverified
2 | mPLUG-Huge | overall | 83.62 | – | Unverified
3 | ONE-PEACE | overall | 82.52 | – | Unverified
4 | X2-VLM (large) | overall | 81.8 | – | Unverified
5 | VLMo | overall | 81.3 | – | Unverified
6 | SimVLM | overall | 80.34 | – | Unverified
7 | X2-VLM (base) | overall | 80.2 | – | Unverified
8 | VAST | overall | 80.19 | – | Unverified
9 | VALOR | overall | 78.62 | – | Unverified
10 | Prompt Tuning | overall | 78.53 | – | Unverified
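A note on the metric: leaderboards that follow the VQA v2 evaluation protocol (the tables above do not label their benchmark or split) report a soft accuracy in which an answer earns full credit if at least 3 of the 10 human annotators gave it, and partial credit otherwise. The sketch below shows the core min(matches / 3, 1) rule; the official evaluator additionally normalizes answers (articles, punctuation, number words) and averages the score over all 10 subsets of 9 annotators, which this simplification omits.

```python
from collections import Counter

def vqa_soft_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Simplified VQA v2 accuracy: min(#annotators who gave the answer / 3, 1)."""
    counts = Counter(a.strip().lower() for a in human_answers)
    return min(counts[predicted.strip().lower()] / 3.0, 1.0)

# Ten illustrative annotator answers for a single question.
annotations = ["2", "2", "2", "2", "2", "2", "2", "3", "3", "4"]
print(vqa_soft_accuracy("2", annotations))  # 1.0       (7 matches, capped at 1)
print(vqa_soft_accuracy("3", annotations))  # 0.666...  (2 matches / 3)
print(vqa_soft_accuracy("4", annotations))  # 0.333...  (1 match / 3)
```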