SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a task at the intersection of computer vision and natural language processing: given an image and a natural-language question about it, a model must produce an accurate natural-language answer. The goal of VQA is to teach machines to understand the content of an image well enough to answer open-ended questions about it.
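
As a concrete illustration, here is a minimal inference sketch using a pretrained ViLT model fine-tuned for VQA, available through the Hugging Face transformers library. The image path and question below are placeholder assumptions, not values from this page.

```python
# Minimal VQA inference sketch with a pretrained ViLT model
# (dandelin/vilt-b32-finetuned-vqa) from Hugging Face transformers.
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

image = Image.open("example.jpg")               # placeholder image path
question = "How many dogs are in the picture?"  # placeholder question

# Encode the image-question pair and pick the highest-scoring answer
inputs = processor(image, question, return_tensors="pt")
logits = model(**inputs).logits
answer = model.config.id2label[logits.argmax(-1).item()]
print(answer)
```

The model treats VQA as classification over a fixed answer vocabulary, which is the common setup for this task; generative vision-language models instead decode the answer as free-form text.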

Image Source: visualqa.org

Papers

Showing 1901–1925 of 2167 papers

| Title | Status | Hype |
| --- | --- | --- |
| Pyramid Coder: Hierarchical Code Generator for Compositional Visual Question Answering | | 0 |
| Q2ATransformer: Improving Medical VQA via an Answer Querying Decoder | | 0 |
| Q-Boost: On Visual Quality Assessment Ability of Low-level Multi-Modality Foundation Models | | 0 |
| QIRL: Boosting Visual Question Answering via Optimized Question-Image Relation Learning | | 0 |
| QSAN: A Near-term Achievable Quantum Self-Attention Network | | 0 |
| QTG-VQA: Question-Type-Guided Architectural for VideoQA Systems | | 0 |
| Quality Prediction of AI Generated Images and Videos: Emerging Trends and Opportunities | | 0 |
| Question-Agnostic Attention for Visual Question Answering | | 0 |
| Question-Conditioned Counterfactual Image Generation for VQA | | 0 |
| Question-Driven Graph Fusion Network For Visual Question Answering | | 0 |
| Question Generation for Evaluating Cross-Dataset Shifts in Multi-modal Grounding | | 0 |
| Question-Guided Hybrid Convolution for Visual Question Answering | | 0 |
| Question Guided Modular Routing Networks for Visual Question Answering | | 0 |
| Question-Led Semantic Structure Enhanced Attentions for VQA | | 0 |
| Question Modifiers in Visual Question Answering | | 0 |
| Question Relevance in Visual Question Answering | | 0 |
| Question Relevance in VQA: Identifying Non-Visual And False-Premise Questions | | 0 |
| Question Type Guided Attention in Visual Question Answering | | 0 |
| R^3-VQA: "Read the Room" by Video Social Reasoning | | 0 |
| RankDVQA-mini: Knowledge Distillation-Driven Deep Video Quality Assessment | | 0 |
| RAVEN: A Dataset for Relational and Analogical Visual rEasoNing | | 0 |
| RAVEN: Multitask Retrieval Augmented Vision-Language Learning | | 0 |
| Reactive Multi-Stage Feature Fusion for Multimodal Dialogue Modeling | | 0 |
| Realizing Visual Question Answering for Education: GPT-4V as a Multimodal AI | | 0 |
| Reasoning LLMs for User-Aware Multimodal Conversational Agents | | 0 |
Page 77 of 87

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | human | Accuracy | 89.3 | | Unverified |
| 2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | | Unverified |
| 3 | TRRNet (Ensemble) | Accuracy | 74.03 | | Unverified |
| 4 | MIL-nbgao | Accuracy | 73.81 | | Unverified |
| 5 | Kakao Brain | Accuracy | 73.33 | | Unverified |
| 6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | | Unverified |
| 7 | 270 | Accuracy | 70.23 | | Unverified |
| 8 | NSM ensemble (updated) | Accuracy | 67.55 | | Unverified |
| 9 | VinVL-DPT | Accuracy | 64.92 | | Unverified |
| 10 | VinVL+L | Accuracy | 64.85 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PaLI | Accuracy | 84.3 | | Unverified |
| 2 | BEiT-3 | Accuracy | 84.19 | | Unverified |
| 3 | VLMo | Accuracy | 82.78 | | Unverified |
| 4 | ONE-PEACE | Accuracy | 82.6 | | Unverified |
| 5 | mPLUG (Huge) | Accuracy | 82.43 | | Unverified |
| 6 | CuMo-7B | Accuracy | 82.2 | | Unverified |
| 7 | X2-VLM (large) | Accuracy | 81.9 | | Unverified |
| 8 | MMU | Accuracy | 81.26 | | Unverified |
| 9 | Lyrics | Accuracy | 81.2 | | Unverified |
| 10 | InternVL-C | Accuracy | 81.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BEiT-3 | overall | 84.03 | | Unverified |
| 2 | mPLUG-Huge | overall | 83.62 | | Unverified |
| 3 | ONE-PEACE | overall | 82.52 | | Unverified |
| 4 | X2-VLM (large) | overall | 81.8 | | Unverified |
| 5 | VLMo | overall | 81.3 | | Unverified |
| 6 | SimVLM | overall | 80.34 | | Unverified |
| 7 | X2-VLM (base) | overall | 80.2 | | Unverified |
| 8 | VAST | overall | 80.19 | | Unverified |
| 9 | VALOR | overall | 78.62 | | Unverified |
| 10 | Prompt Tuning | overall | 78.53 | | Unverified |
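
The page does not state how the Accuracy and overall scores above are computed. On the standard VQA benchmarks these figures typically use the consensus metric from the original VQA dataset (Antol et al., 2015), where each question has ten human-annotated answers and an answer counts as fully correct if at least three annotators gave it. A minimal sketch of that metric, assuming ten free-form human answers per question:

```python
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Standard VQA consensus accuracy (Antol et al., 2015):
    full credit if >= 3 of the 10 annotators gave the predicted
    answer, proportional partial credit otherwise."""
    matches = sum(a == predicted for a in human_answers)
    return min(matches / 3.0, 1.0)

# Example: 2 of 10 annotators said "blue" -> 2/3 ≈ 0.67
print(vqa_accuracy("blue", ["blue", "blue"] + ["navy"] * 8))
```

In practice, evaluation code also normalizes answers (lowercasing, stripping articles and punctuation) before matching, and averages the score over all 10-choose-9 annotator subsets.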