
Visual Question Answering (VQA)

Visual Question Answering (VQA) is a multimodal task at the intersection of computer vision and natural language processing: given an image and a natural-language question about it, a model must produce a natural-language answer. The goal is to teach machines to understand the content of an image well enough to answer open-ended questions about it.

Image Source: visualqa.org
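
For a concrete sense of the task's interface, here is a minimal inference sketch using the Hugging Face transformers visual-question-answering pipeline. The checkpoint (dandelin/vilt-b32-finetuned-vqa) and the sample COCO image are illustrative choices, not something this page prescribes.

```python
# Minimal VQA inference sketch (assumes `transformers`, `Pillow`, and
# `requests` are installed; the model and image below are example choices).
from transformers import pipeline
from PIL import Image
import requests

# ViLT checkpoint fine-tuned on the VQA v2 answer vocabulary.
vqa = pipeline("visual-question-answering",
               model="dandelin/vilt-b32-finetuned-vqa")

# Any RGB image works; this one is from the COCO val2017 set.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The pipeline returns candidate answers ranked by confidence.
for pred in vqa(image=image, question="How many cats are there?", top_k=3):
    print(f"{pred['answer']}: {pred['score']:.3f}")
```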

Papers

Showing 701–725 of 2167 papers

| Title | Status | Hype |
|---|---|---|
| Feature4X: Bridging Any Monocular Video to 4D Agentic AI with Versatile Gaussian Feature Fields | | 0 |
| ORION: A Holistic End-to-End Autonomous Driving Framework by Vision-Language Instructed Action Generation | | 0 |
| LEGO-Puzzles: How Good Are MLLMs at Multi-Step Spatial Reasoning? | | 0 |
| VGAT: A Cancer Survival Analysis Framework Transitioning from Generative Visual Question Answering to Genomic Reconstruction | Code | 0 |
| DiN: Diffusion Model for Robust Medical VQA with Semantic Noisy Labels | | 0 |
| Where is this coming from? Making groundedness count in the evaluation of Document VQA models | | 0 |
| MAGIC-VQA: Multimodal And Grounded Inference with Commonsense Knowledge for Visual Question Answering | | 0 |
| Expanding the Boundaries of Vision Prior Knowledge in Multi-modal Large Language Models | | 0 |
| Progressive Prompt Detailing for Improved Alignment in Text-to-Image Generative Models | Code | 0 |
| A Vision Centric Remote Sensing Benchmark | | 0 |
| UPME: An Unsupervised Peer Review Framework for Multimodal Large Language Model Evaluation | | 0 |
| TruthLens: A Training-Free Paradigm for DeepFake Detection | | 0 |
| ChatBEV: A Visual Language Model that Understands BEV Maps | | 0 |
| Marten: Visual Question Answering with Mask Generation for Multi-modal Document Understanding | Code | 0 |
| GeoRSMLLM: A Multimodal Large Language Model for Vision-Language Tasks in Geoscience and Remote Sensing | | 0 |
| T2I-FineEval: Fine-Grained Compositional Metric for Text-to-Image Evaluation | Code | 0 |
| DynRsl-VLM: Enhancing Autonomous Driving Perception with Dynamic Resolution Vision-Language Models | | 0 |
| Astrea: A MOE-based Visual Understanding Model with Progressive Alignment | | 0 |
| SurgicalVLM-Agent: Towards an Interactive AI Co-Pilot for Pituitary Surgery | | 0 |
| ComicsPAP: understanding comic strips by picking the correct panel | | 0 |
| Seeing and Reasoning with Confidence: Supercharging Multimodal LLMs with an Uncertainty-Aware Agentic Framework | | 0 |
| Bring Remote Sensing Object Detect Into Nature Language Model: Using SFT Method | | 0 |
| Robusto-1 Dataset: Comparing Humans and VLMs on real out-of-distribution Autonomous Driving VQA from Peru | | 0 |
| CalliReader: Contextualizing Chinese Calligraphy via an Embedding-Aligned Vision-Language Model | | 0 |
| MoEMoE: Question Guided Dense and Scalable Sparse Mixture-of-Expert for Multi-source Multi-modal Answering | | 0 |
Page 29 of 87

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | human | Accuracy | 89.3 | | Unverified |
| 2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | | Unverified |
| 3 | TRRNet (Ensemble) | Accuracy | 74.03 | | Unverified |
| 4 | MIL-nbgao | Accuracy | 73.81 | | Unverified |
| 5 | Kakao Brain | Accuracy | 73.33 | | Unverified |
| 6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | | Unverified |
| 7 | 270 | Accuracy | 70.23 | | Unverified |
| 8 | NSM ensemble (updated) | Accuracy | 67.55 | | Unverified |
| 9 | VinVL-DPT | Accuracy | 64.92 | | Unverified |
| 10 | VinVL+L | Accuracy | 64.85 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | PaLI | Accuracy | 84.3 | | Unverified |
| 2 | BEiT-3 | Accuracy | 84.19 | | Unverified |
| 3 | VLMo | Accuracy | 82.78 | | Unverified |
| 4 | ONE-PEACE | Accuracy | 82.6 | | Unverified |
| 5 | mPLUG (Huge) | Accuracy | 82.43 | | Unverified |
| 6 | CuMo-7B | Accuracy | 82.2 | | Unverified |
| 7 | X2-VLM (large) | Accuracy | 81.9 | | Unverified |
| 8 | MMU | Accuracy | 81.26 | | Unverified |
| 9 | Lyrics | Accuracy | 81.2 | | Unverified |
| 10 | InternVL-C | Accuracy | 81.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BEiT-3 | overall | 84.03 | | Unverified |
| 2 | mPLUG-Huge | overall | 83.62 | | Unverified |
| 3 | ONE-PEACE | overall | 82.52 | | Unverified |
| 4 | X2-VLM (large) | overall | 81.8 | | Unverified |
| 5 | VLMo | overall | 81.3 | | Unverified |
| 6 | SimVLM | overall | 80.34 | | Unverified |
| 7 | X2-VLM (base) | overall | 80.2 | | Unverified |
| 8 | VAST | overall | 80.19 | | Unverified |
| 9 | VALOR | overall | 78.62 | | Unverified |
| 10 | Prompt Tuning | overall | 78.53 | | Unverified |
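
The Accuracy and overall figures above are claimed scores as reported by the papers; none have been independently verified. As a reference point for how such scores are typically computed, the sketch below implements the consensus accuracy from the official VQA v2 evaluation, where a predicted answer is scored as min(#matching human answers / 3, 1), averaged over every 9-annotator subset of the 10 collected answers. Whether each leaderboard above uses exactly this metric is an assumption, and the official evaluation's answer normalization (articles, punctuation, number words) is omitted for brevity.

```python
from itertools import combinations

def vqa_accuracy(prediction: str, human_answers: list[str]) -> float:
    """Consensus accuracy from the VQA v2 evaluation: score the prediction
    against every 9-annotator subset of the 10 human answers as
    min(#matches / 3, 1), then average the subset scores.

    Note: the official evaluation also normalizes answers (lowercasing,
    punctuation, articles, number words) before matching; omitted here.
    """
    scores = []
    for subset in combinations(human_answers, len(human_answers) - 1):
        matches = sum(ans == prediction for ans in subset)
        scores.append(min(matches / 3.0, 1.0))
    return sum(scores) / len(scores)

# Example: 3 of the 10 annotators answered "2", so subsets that keep all
# three matches score 1.0 and subsets that drop one score 2/3.
answers = ["2", "two", "2", "3", "two", "2", "two cats", "4", "3", "two"]
print(round(vqa_accuracy("2", answers), 3))  # -> 0.9
```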