SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer vision task in which a system is given an image and a natural-language question about it and must produce a correct answer. The goal of VQA is to teach machines to understand the content of an image well enough to answer questions about it in natural language.

Image Source: visualqa.org
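
To make the task interface concrete, here is a minimal sketch of querying an off-the-shelf VQA model: an image plus a natural-language question goes in, and ranked candidate answers come out. It assumes the Hugging Face transformers library with its visual-question-answering pipeline and the public ViLT checkpoint dandelin/vilt-b32-finetuned-vqa; the image path and question are placeholders, and the sketch is not tied to any model in the tables below.

# Minimal illustration of the VQA interface (image + question -> answer),
# assuming the Hugging Face `transformers` library and the public ViLT
# checkpoint "dandelin/vilt-b32-finetuned-vqa"; image path and question
# are placeholders.
from transformers import pipeline

vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")

# The pipeline accepts an image (local path, URL, or PIL.Image) and a question,
# and returns candidate answers ranked by confidence score.
predictions = vqa(image="street_scene.jpg", question="How many people are crossing the street?")
for p in predictions[:3]:
    print(p["answer"], p["score"])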

Papers

Showing 751–800 of 2167 papers

Title | Status | Hype
Alignment, Mining and Fusion: Representation Alignment with Hard Negative Mining and Selective Knowledge Fusion for Medical Visual Question Answering | | 0
KNVQA: A Benchmark for evaluation knowledge-based VQA | | 0
KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA | | 0
Knowledge-Based Visual Question Answering in Videos | | 0
HD-EPIC: A Highly-Detailed Egocentric Video Dataset | | 0
Knowledge Condensation and Reasoning for Knowledge-based VQA | | 0
Attention Mechanism based Cognition-level Scene Understanding | | 0
Hardware-Friendly Static Quantization Method for Video Diffusion Transformers | | 0
HAMMR: HierArchical MultiModal React agents for generic VQA | | 0
Prompting Medical Large Vision-Language Models to Diagnose Pathologies by Visual Question Answering | | 0
Knowledge Detection by Relevant Question and Image Attributes in Visual Question Answering | | 0
KVL-BERT: Knowledge Enhanced Visual-and-Linguistic BERT for Visual Commonsense Reasoning | | 0
Language bias in Visual Question Answering: A Survey and Taxonomy | | 0
LAPDoc: Layout-Aware Prompting for Documents | | 0
Learning by Hallucinating: Vision-Language Pre-training with Weak Supervision | | 0
Attention Guided Semantic Relationship Parsing for Visual Question Answering | | 0
'Just because you are right, doesn't mean I am wrong': Overcoming a bottleneck in development and evaluation of Open-Ended VQA tasks | | 0
KAT: A Knowledge Augmented Transformer for Vision-and-Language | | 0
Jointly Learning Truth-Conditional Denotations and Groundings using Parallel Attention | | 0
HAUR: Human Annotation Understanding and Recognition Through Text-Heavy Images | | 0
JTD-UAV: MLLM-Enhanced Joint Tracking and Description Framework for Anti-UAV Systems | | 0
Kernel Pooling for Convolutional Neural Networks | | 0
Guiding Visual Question Generation | | 0
HDR-ChipQA: No-Reference Quality Assessment on High Dynamic Range Videos | | 0
AlignVE: Visual Entailment Recognition Based on Alignment Relations | | 0
Guiding Visual Question Answering with Attention Priors | | 0
Connecting phases of matter to the flatness of the loss landscape in analog variational quantum algorithms | | 0
Connecting Language and Vision to Actions | | 0
Guiding Medical Vision-Language Models with Explicit Visual Prompts: Framework Design and Comprehensive Exploration of Prompt Variations | | 0
Hierarchical Memory for Long Video QA | | 0
Hierarchical Modeling for Medical Visual Question Answering with Cross-Attention Fusion | | 0
A Transformer-based Cross-modal Fusion Model with Adversarial Training for VQA Challenge 2021 | | 0
Joint Image Captioning and Question Answering | | 0
Grounding Complex Navigational Instructions Using Scene Graphs | | 0
Grounding Chest X-Ray Visual Question Answering with Generated Radiology Reports | | 0
Highly Efficient No-reference 4K Video Quality Assessment with Full-Pixel Covering Sampling and Training Strategy | | 0
Grounding Answers for Visual Questions Asked by Visually Impaired People | | 0
HiTeA: Hierarchical Temporal-Aware Video-Language Pre-training | | 0
A Token-level Text Image Foundation Model for Document Understanding | | 0
How good are deep models in understanding the generated images? | | 0
Joint learning of object graph and relation graph for visual question answering | | 0
Generating and Evaluating Explanations of Attended and Error-Inducing Input Regions for VQA Models | | 0
Compressing Visual-linguistic Model via Knowledge Distillation | | 0
Grounded Word Sense Translation | | 0
It Takes Two to Tango: Towards Theory of AI's Mind | | 0
Griffon-G: Bridging Vision-Language and Vision-Centric Tasks via Large Multimodal Models | | 0
How to Design Sample and Computationally Efficient VQA Models | | 0
Co-VQA : Answering by Interactive Sub Question Sequence | | 0
A Thousand Words Are Worth More Than a Picture: Natural Language-Centric Outside-Knowledge Visual Question Answering | | 0
iVQA: Inverse Visual Question Answering | | 0
Page 16 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | | Unverified
5 | Kakao Brain | Accuracy | 73.33 | | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | | Unverified
7 | 270 | Accuracy | 70.23 | | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | | Unverified
10 | VinVL+L | Accuracy | 64.85 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | | Unverified
2 | BEiT-3 | Accuracy | 84.19 | | Unverified
3 | VLMo | Accuracy | 82.78 | | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | | Unverified
6 | CuMo-7B | Accuracy | 82.2 | | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | | Unverified
8 | MMU | Accuracy | 81.26 | | Unverified
9 | Lyrics | Accuracy | 81.2 | | Unverified
10 | InternVL-C | Accuracy | 81.2 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | | Unverified
2 | mPLUG-Huge | overall | 83.62 | | Unverified
3 | ONE-PEACE | overall | 82.52 | | Unverified
4 | X2-VLM (large) | overall | 81.8 | | Unverified
5 | VLMo | overall | 81.3 | | Unverified
6 | SimVLM | overall | 80.34 | | Unverified
7 | X2-VLM (base) | overall | 80.2 | | Unverified
8 | VAST | overall | 80.19 | | Unverified
9 | VALOR | overall | 78.62 | | Unverified
10 | Prompt Tuning | overall | 78.53 | | Unverified