SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a task at the intersection of computer vision and natural language processing: given an image and a free-form question about it, a model must produce a correct answer in natural language. Solving it requires the model to both understand the visual content of the image and ground the question's language in that content.
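VQA is commonly cast as classification over a fixed answer vocabulary, with the image and question features fused into a joint representation. The sketch below is purely illustrative (all names, feature values, and weights are hypothetical), using a simple elementwise-product fusion in the spirit of early VQA models:

```python
# Illustrative-only sketch of the VQA interface: (image, question) -> answer.
# All features, weights, and the tiny answer vocabulary are hypothetical.

def answer_question(image_features, question_features, answer_vocab, weights):
    """Score each candidate answer after fusing image and question features
    with an elementwise product (a common early-VQA fusion scheme)."""
    fused = [i * q for i, q in zip(image_features, question_features)]
    scores = {
        ans: sum(f * w for f, w in zip(fused, w_vec))
        for ans, w_vec in weights.items()
    }
    # VQA is typically framed as classification over a fixed answer vocabulary
    return max(answer_vocab, key=lambda a: scores[a])

# Toy usage with 3-dim features and a 2-answer vocabulary
img = [0.9, 0.1, 0.4]   # e.g. pooled CNN image features (hypothetical)
qst = [0.8, 0.2, 0.1]   # e.g. question-encoder output (hypothetical)
vocab = ["cat", "dog"]
weights = {"cat": [1.0, 0.0, 0.0], "dog": [0.0, 1.0, 0.0]}
print(answer_question(img, qst, vocab, weights))  # -> cat
```

Real systems replace the toy vectors with learned image and text encoders, but the task shape — fuse two modalities, score a fixed answer set — is the same.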

Image Source: visualqa.org

Papers

Showing 801–850 of 2167 papers

- How Well Can Vision-Language Models Understand Humans' Intention? An Open-ended Theory of Mind Question Evaluation Benchmark
- CP-LLM: Context and Pixel Aware Large Language Model for Video Quality Assessment
- Connecting phases of matter to the flatness of the loss landscape in analog variational quantum algorithms
- CQ-VQA: Visual Question Answering on Categorized Questions
- Connecting Language and Vision to Actions
- Guiding Medical Vision-Language Models with Explicit Visual Prompts: Framework Design and Comprehensive Exploration of Prompt Variations
- A Transformer-based Cross-modal Fusion Model with Adversarial Training for VQA Challenge 2021
- Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions?
- Hummingbird: High Fidelity Image Generation via Multimodal Context Alignment
- HVS Revisited: A Comprehensive Video Quality Assessment Framework
- Grounding Complex Navigational Instructions Using Scene Graphs
- Grounding Chest X-Ray Visual Question Answering with Generated Radiology Reports
- Grounding Answers for Visual Questions Asked by Visually Impaired People
- A Token-level Text Image Foundation Model for Document Understanding
- Large Scale Scene Text Verification with Guided Attention
- LEAF-QA: Locate, Encode & Attend for Figure Question Answering
- Compressing Visual-linguistic Model via Knowledge Distillation
- ICDAR 2021 Competition on Document Visual Question Answering
- Grounded Word Sense Translation
- LAPDoc: Layout-Aware Prompting for Documents
- A Dataset for Multimodal Question Answering in the Cultural Heritage Domain
- Neural Reasoning, Fast and Slow, for Video Question Answering
- Griffon-G: Bridging Vision-Language and Vision-Centric Tasks via Large Multimodal Models
- A Thousand Words Are Worth More Than a Picture: Natural Language-Centric Outside-Knowledge Visual Question Answering
- Graph-Structured Representations for Visual Question Answering
- CLIPPO: Image-and-Language Understanding from Pixels Only
- Compound Tokens: Channel Fusion for Vision-Language Representation Learning
- Image Captioning and Visual Question Answering Based on Attributes and External Knowledge
- Image Captioning with Compositional Neural Module Networks
- Image Manipulation via Multi-Hop Instructions -- A New Dataset and Weakly-Supervised Neuro-Symbolic Approach
- Graph Relation Transformer: Incorporating pairwise object features into the Transformer architecture
- Bilinear Graph Networks for Visual Question Answering
- Aligning MAGMA by Few-Shot Learning and Finetuning
- ImageTTR: Grounding Type Theory with Records in Image Classification for Visual Question Answering
- Graph Neural Networks in Vision-Language Image Understanding: A Survey
- CrossVQA: Scalably Generating Benchmarks for Systematically Testing VQA Generalization
- Compositional Memory for Visual Question Answering
- Improved Bilinear Pooling with CNNs
- Graph Edit Distance Reward: Learning to Edit Scene Graph
- Improved Few-Shot Image Classification Through Multiple-Choice Questions
- A survey on VQA: Datasets and Approaches
- Improving and Diagnosing Knowledge-Based Visual Question Answering via Entity Enhanced Knowledge Injection
- Improving Automatic VQA Evaluation Using Large Language Models
- Improving Cross-Modal Understanding in Visual Dialog via Contrastive Learning
- Improving Data Augmentation for Robust Visual Question Answering with Effective Curriculum Learning
- Improving Generalization in Visual Reasoning via Self-Ensemble
- A survey on knowledge-enhanced multimodal learning
- Improving mitosis detection on histopathology images using large vision-language models
- Graph-based Heuristic Search for Module Selection Procedure in Neural Module Network
- GRAM: Global Reasoning for Multi-Page VQA
Page 17 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | human | Accuracy | 89.3 | | Unverified |
| 2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | | Unverified |
| 3 | TRRNet (Ensemble) | Accuracy | 74.03 | | Unverified |
| 4 | MIL-nbgao | Accuracy | 73.81 | | Unverified |
| 5 | Kakao Brain | Accuracy | 73.33 | | Unverified |
| 6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | | Unverified |
| 7 | 270 | Accuracy | 70.23 | | Unverified |
| 8 | NSM ensemble (updated) | Accuracy | 67.55 | | Unverified |
| 9 | VinVL-DPT | Accuracy | 64.92 | | Unverified |
| 10 | VinVL+L | Accuracy | 64.85 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | PaLI | Accuracy | 84.3 | | Unverified |
| 2 | BEiT-3 | Accuracy | 84.19 | | Unverified |
| 3 | VLMo | Accuracy | 82.78 | | Unverified |
| 4 | ONE-PEACE | Accuracy | 82.6 | | Unverified |
| 5 | mPLUG (Huge) | Accuracy | 82.43 | | Unverified |
| 6 | CuMo-7B | Accuracy | 82.2 | | Unverified |
| 7 | X2-VLM (large) | Accuracy | 81.9 | | Unverified |
| 8 | MMU | Accuracy | 81.26 | | Unverified |
| 9 | InternVL-C | Accuracy | 81.2 | | Unverified |
| 10 | Lyrics | Accuracy | 81.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | BEiT-3 | overall | 84.03 | | Unverified |
| 2 | mPLUG-Huge | overall | 83.62 | | Unverified |
| 3 | ONE-PEACE | overall | 82.52 | | Unverified |
| 4 | X2-VLM (large) | overall | 81.8 | | Unverified |
| 5 | VLMo | overall | 81.3 | | Unverified |
| 6 | SimVLM | overall | 80.34 | | Unverified |
| 7 | X2-VLM (base) | overall | 80.2 | | Unverified |
| 8 | VAST | overall | 80.19 | | Unverified |
| 9 | VALOR | overall | 78.62 | | Unverified |
| 10 | Prompt Tuning | overall | 78.53 | | Unverified |
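The "Accuracy" and "overall" numbers above typically follow the consensus metric from visualqa.org, where each question comes with ten human answers and a prediction scores min(matches / 3, 1). A simplified per-question sketch (the official evaluator additionally normalizes answer strings and averages over annotator subsets, which this omits):

```python
def vqa_accuracy(predicted, human_answers):
    """Simplified consensus VQA accuracy for one question: a prediction is
    fully correct if at least 3 of the 10 human annotators gave it.
    (The official evaluator also normalizes answer strings and averages
    over 10-choose-9 annotator subsets, omitted here for brevity.)"""
    matches = sum(1 for a in human_answers if a == predicted)
    return min(matches / 3.0, 1.0)

# Toy example: ten human answers to "How many dogs are in the image?"
humans = ["2", "2", "two", "2", "3", "2", "2", "2", "2", "2"]
print(vqa_accuracy("2", humans))    # 8 matches -> 1.0
print(vqa_accuracy("two", humans))  # 1 match  -> ~0.33
```

Averaging this per-question score over the test set gives the leaderboard percentage, which is why "human" accuracy itself falls short of 100 in the first table: annotators do not always agree with each other.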