SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a task at the intersection of computer vision and natural language processing: given an image and a natural-language question about it, a model must produce a natural-language answer. The goal is to teach machines to understand the content of an image well enough to answer free-form questions about it.

Image Source: visualqa.org
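
In practice, a VQA system maps an image plus a question string to an answer string. As a minimal sketch of what that looks like, the snippet below runs an off-the-shelf VQA model from the Hugging Face hub; the checkpoint name (`dandelin/vilt-b32-finetuned-vqa`), the image URL, and the question are illustrative assumptions, not something this page prescribes.

```python
# Minimal VQA inference sketch using Hugging Face transformers.
# Assumes: pip install transformers torch pillow requests
import requests
import torch
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

checkpoint = "dandelin/vilt-b32-finetuned-vqa"  # example public VQA checkpoint
processor = ViltProcessor.from_pretrained(checkpoint)
model = ViltForQuestionAnswering.from_pretrained(checkpoint)

# Placeholder example image (a COCO validation photo) and question.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

# Encode the image-question pair and score every answer in the model's
# fixed answer vocabulary.
inputs = processor(image, question, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # e.g. "2"
```

Models of this family treat VQA as classification over a few thousand frequent answers, which is why the prediction is an argmax over logits rather than generated text; newer vision-language models instead generate the answer token by token.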

Papers

Showing 1801–1825 of 2167 papers

Title | Status | Hype
Natural Reflection Backdoor Attack on Vision Language Model for Autonomous Driving | — | 0
Negative Object Presence Evaluation (NOPE) to Measure Object Hallucination in Vision-Language Models | — | 0
NegVQA: Can Vision Language Models Understand Negation? | — | 0
Neural Attention Models for Sequence Classification: Analysis and Application to Key Term Extraction and Dialogue Act Detection | — | 0
Neural Memory Plasticity for Anomaly Detection | — | 0
Neural Self Talk: Image Understanding via Continuous Questioning and Answering | — | 0
NeurIPS 2023 Competition: Privacy Preserving Federated Learning Document VQA | — | 0
Neuro-Symbolic Spatio-Temporal Reasoning | — | 0
Neuro-Symbolic Visual Reasoning: Disentangling "Visual" from "Reasoning" | — | 0
Neuro-Symbolic VQA: A review from the perspective of AGI desiderata | — | 0
New Ideas and Trends in Deep Multimodal Content Understanding: A Review | — | 0
NEWSKVQA: Knowledge-Aware News Video Question Answering | — | 0
NMT-Keras: a Very Flexible Toolkit with a Focus on Interactive NMT and Online Learning | — | 0
Non-monotonic Logical Reasoning Guiding Deep Learning for Explainable Visual Question Answering | — | 0
KonVid-150k: A Dataset for No-Reference Video Quality Assessment of Videos in-the-Wild | — | 0
Normalized and Geometry-Aware Self-Attention Network for Image Captioning | — | 0
Not all Views are Created Equal: Analyzing Viewpoint Instabilities in Vision Foundation Models | — | 0
NoTeS-Bank: Benchmarking Neural Transcription and Search for Scientific Notes Understanding | — | 0
Not-So-CLEVR: Visual Relations Strain Feedforward Neural Networks | — | 0
NTIRE 2023 Quality Assessment of Video Enhancement Challenge | — | 0
NTIRE 2024 Quality Assessment of AI-Generated Content Challenge | — | 0
Object-based reasoning in VQA | — | 0
Object-Centric Diagnosis of Visual Reasoning | — | 0
Off-Policy Evaluation for Human Feedback | — | 0
OMGM: Orchestrate Multiple Granularities and Modalities for Efficient Multimodal Retrieval | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | — | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | — | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | — | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | — | Unverified
5 | Kakao Brain | Accuracy | 73.33 | — | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | — | Unverified
7 | 270 | Accuracy | 70.23 | — | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | — | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | — | Unverified
10 | VinVL+L | Accuracy | 64.85 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | — | Unverified
2 | BEiT-3 | Accuracy | 84.19 | — | Unverified
3 | VLMo | Accuracy | 82.78 | — | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | — | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | — | Unverified
6 | CuMo-7B | Accuracy | 82.2 | — | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | — | Unverified
8 | MMU | Accuracy | 81.26 | — | Unverified
9 | Lyrics | Accuracy | 81.2 | — | Unverified
10 | InternVL-C | Accuracy | 81.2 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | — | Unverified
2 | mPLUG-Huge | overall | 83.62 | — | Unverified
3 | ONE-PEACE | overall | 82.52 | — | Unverified
4 | X2-VLM (large) | overall | 81.8 | — | Unverified
5 | VLMo | overall | 81.3 | — | Unverified
6 | SimVLM | overall | 80.34 | — | Unverified
7 | X2-VLM (base) | overall | 80.2 | — | Unverified
8 | VAST | overall | 80.19 | — | Unverified
9 | VALOR | overall | 78.62 | — | Unverified
10 | Prompt Tuning | overall | 78.53 | — | Unverified
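
For context on the metric columns: the "Accuracy" and "overall" scores reported on VQA-style leaderboards are usually the consensus accuracy from the VQA evaluation protocol, in which a predicted answer earns credit according to how many of the (typically ten) human annotators gave the same answer. A simplified sketch is below; the official scorer additionally normalizes answer strings and averages over annotator subsets, which is omitted here.

```python
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Simplified VQA consensus accuracy: an answer gets full credit when
    at least 3 human annotators agree with it, partial credit below that."""
    matches = sum(answer == predicted for answer in human_answers)
    return min(1.0, matches / 3.0)

# Example: only 2 of 10 annotators said exactly "blue", so the score is 2/3.
annotators = ["blue", "blue", "navy", "teal", "gray", "dark blue",
              "navy blue", "azure", "cyan", "light blue"]
print(vqa_accuracy("blue", annotators))  # 0.666...
```

A leaderboard score is then the mean of this per-question accuracy over the whole test set.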