
Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer vision task in which a system answers natural-language questions about an image. The goal is to teach machines to understand the content of an image well enough to answer free-form questions about it in natural language.

Image Source: visualqa.org
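
To make the task concrete, here is a minimal inference sketch using the Hugging Face transformers visual-question-answering pipeline. The checkpoint name (dandelin/vilt-b32-finetuned-vqa), the image path, and the example question are illustrative assumptions, not anything this page prescribes; any VQA-finetuned checkpoint on the Hub should work the same way.

```python
from transformers import pipeline
from PIL import Image

# ViLT checkpoint fine-tuned on VQA v2 (assumed to be available on the Hub).
vqa = pipeline("visual-question-answering",
               model="dandelin/vilt-b32-finetuned-vqa")

image = Image.open("example.jpg")  # any local image
result = vqa(image=image, question="What color is the umbrella?", top_k=1)

# The pipeline returns a list of {"answer": ..., "score": ...} candidates.
print(result[0]["answer"], result[0]["score"])
```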

Papers

Showing 1001–1050 of 2167 papers (page 21 of 44)

All papers on this page currently show a hype score of 0 and no verification status.

- Gender and Racial Bias in Visual Question Answering Datasets
- Gemini Pro Defeated by GPT-4V: Evidence from Education
- COCO is "ALL" You Need for Visual Instruction Fine-tuning
- Ask Me Anything: Free-form Visual Question Answering Based on Knowledge from External Sources
- Learning to Disambiguate by Asking Discriminative Questions
- GEMeX-ThinkVG: Towards Thinking with Visual Grounding in Medical VQA via Reinforcement Learning
- GEMeX: A Large-Scale, Groundable, and Explainable Medical VQA Benchmark for Chest X-ray Diagnosis
- GC-KBVQA: A New Four-Stage Framework for Enhancing Knowledge Based Visual Question Answering Performance
- Asking questions on handwritten document collections
- Learning to Reason Iteratively and Parallelly for Complex Visual Reasoning Scenarios
- CoBIT: A Contrastive Bi-directional Image-Text Generation Model
- Learning to Recognize the Unseen Visual Predicates
- FVQA: Fact-based Visual Question Answering
- Learning to Specialize with Knowledge Distillation for Visual Question Answering
- FVQA 2.0: Introducing Adversarial Samples into Fact-based Visual Question Answering
- MIMOQA: Multimodal Input Multimodal Output Question Answering
- Fusion of Domain-Adapted Vision and Language Models for Medical Visual Question Answering
- Fusion of Detected Objects in Text for Visual Question Answering
- LEGO-Puzzles: How Good Are MLLMs at Multi-Step Spatial Reasoning?
- Ada-DQA: Adaptive Diverse Quality-aware Feature Acquisition for Video Quality Assessment
- FunBench: Benchmarking Fundus Reading Skills of MLLMs
- Asking More Informative Questions for Grounded Retrieval
- MindBench: A Comprehensive Benchmark for Mind Map Structure Recognition and Analysis
- Full-reference Video Quality Assessment for User Generated Content Transcoding
- MetaVL: Transferring In-Context Learning Ability From Language Models to Vision-Language Models
- From Wrong To Right: A Recursive Approach Towards Vision-Language Explanation
- MF2-MVQA: A Multi-stage Feature Fusion method for Medical Visual Question Answering
- CRIC: A VQA Dataset for Compositional Reasoning on Vision and Commonsense
- Abduction of Domain Relationships from Data for VQA
- From Strings to Things: Knowledge-Enabled VQA Model That Can Read and Reason
- MGA-VQA: Multi-Granularity Alignment for Visual Question Answering
- From Shallow to Deep: Compositional Reasoning over Graphs for Visual Question Answering
- From Pixels to Objects: Cubic Visual Attention for Visual Question Answering
- CL-MoE: Enhancing Multimodal Large Language Model with Dual Momentum Mixture-of-Experts for Continual Visual Question Answering
- From Pixels to Graphs: using Scene and Knowledge Graphs for HD-EPIC VQA Challenge
- From Known to the Unknown: Transferring Knowledge to Answer Questions about Novel Visual and Semantic Concepts
- Does CLIP Benefit Visual Question Answering in the Medical Domain as Much as it Does in the General Domain?
- Memory-Augmented Multimodal LLMs for Surgical VQA via Self-Contained Inquiry
- From Image to Language: A Critical Analysis of Visual Question Answering (VQA) Approaches, Challenges, and Opportunities
- CLIP-UP: CLIP-Based Unanswerable Problem Detection for Visual Question Answering
- CLIP-TD: CLIP Targeted Distillation for Vision-Language Tasks
- From Images to Textual Prompts: Zero-Shot Visual Question Answering With Frozen Large Language Models
- From Easy to Hard: Learning Language-guided Curriculum for Visual Question Answering on Remote Sensing Data
- Memory Augmented Neural Networks for Natural Language Processing
- MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM
- Free Form Medical Visual Question Answering in Radiology
- CLIP Models are Few-shot Learners: Empirical Studies on VQA and Visual Entailment
- A Short Survey of Systematic Generalization
- FOVQA: Blind Foveated Video Quality Assessment
- A Shared Task on Multimodal Machine Translation and Crosslingual Image Description

Benchmark Results

Top-10 claimed results from the three leaderboards tracked for this task; no entry has a verified score yet.

 # | Model                                  | Metric   | Claimed | Verified | Status
 1 | human                                  | Accuracy | 89.3    |          | Unverified
 2 | DREAM+Unicoder-VL (MSRA)               | Accuracy | 76.04   |          | Unverified
 3 | TRRNet (Ensemble)                      | Accuracy | 74.03   |          | Unverified
 4 | MIL-nbgao                              | Accuracy | 73.81   |          | Unverified
 5 | Kakao Brain                            | Accuracy | 73.33   |          | Unverified
 6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14   |          | Unverified
 7 | 270                                    | Accuracy | 70.23   |          | Unverified
 8 | NSM ensemble (updated)                 | Accuracy | 67.55   |          | Unverified
 9 | VinVL-DPT                              | Accuracy | 64.92   |          | Unverified
10 | VinVL+L                                | Accuracy | 64.85   |          | Unverified

 # | Model          | Metric   | Claimed | Verified | Status
 1 | PaLI           | Accuracy | 84.3    |          | Unverified
 2 | BEiT-3         | Accuracy | 84.19   |          | Unverified
 3 | VLMo           | Accuracy | 82.78   |          | Unverified
 4 | ONE-PEACE      | Accuracy | 82.6    |          | Unverified
 5 | mPLUG (Huge)   | Accuracy | 82.43   |          | Unverified
 6 | CuMo-7B        | Accuracy | 82.2    |          | Unverified
 7 | X2-VLM (large) | Accuracy | 81.9    |          | Unverified
 8 | MMU            | Accuracy | 81.26   |          | Unverified
 9 | Lyrics         | Accuracy | 81.2    |          | Unverified
10 | InternVL-C     | Accuracy | 81.2    |          | Unverified

 # | Model          | Metric  | Claimed | Verified | Status
 1 | BEiT-3         | overall | 84.03   |          | Unverified
 2 | mPLUG-Huge     | overall | 83.62   |          | Unverified
 3 | ONE-PEACE      | overall | 82.52   |          | Unverified
 4 | X2-VLM (large) | overall | 81.8    |          | Unverified
 5 | VLMo           | overall | 81.3    |          | Unverified
 6 | SimVLM         | overall | 80.34   |          | Unverified
 7 | X2-VLM (base)  | overall | 80.2    |          | Unverified
 8 | VAST           | overall | 80.19   |          | Unverified
 9 | VALOR          | overall | 78.62   |          | Unverified
10 | Prompt Tuning  | overall | 78.53   |          | Unverified
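
A note on the metric: leaderboards that report "Accuracy" for open-ended VQA typically use the consensus scorer introduced with the original VQA dataset, where a predicted answer is compared against ten human answers. Below is a minimal sketch of the commonly used simplified form; whether these particular tables use exactly this scorer is an assumption, as the page does not say.

```python
# Simplified open-ended VQA accuracy (Antol et al., 2015). The official
# scorer also normalizes answer strings and averages over all 10-choose-9
# annotator subsets; this sketch keeps only the core min(#matches/3, 1) rule.

def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Full credit if at least 3 of the 10 annotators gave the predicted
    answer; proportional partial credit below that."""
    matches = sum(ans == predicted for ans in human_answers)
    return min(matches / 3.0, 1.0)

# Example: 2 of 10 annotators answered "blue" -> accuracy ~0.67
print(round(vqa_accuracy("blue", ["blue"] * 2 + ["teal"] * 8), 2))
```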