SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system answers natural-language questions about an image: given an image and a question, the model must understand the visual content well enough to produce a correct answer in natural language.

Image Source: visualqa.org
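
As a concrete illustration of the task, the sketch below runs a pretrained VQA model through the Hugging Face transformers pipeline. The checkpoint name, image file, and question are assumptions for illustration and are not tied to any paper or leaderboard entry on this page.

```python
# Minimal VQA inference sketch using the transformers "visual-question-answering"
# pipeline. Checkpoint, image path, and question are illustrative assumptions.
from PIL import Image
from transformers import pipeline

vqa = pipeline(
    "visual-question-answering",
    model="dandelin/vilt-b32-finetuned-vqa",  # a public ViLT model fine-tuned on VQA v2
)

image = Image.open("example.jpg")        # any RGB image
question = "What color is the umbrella?"

# The pipeline returns candidate short answers ranked by confidence.
for prediction in vqa(image=image, question=question, top_k=3):
    print(f'{prediction["answer"]}: {prediction["score"]:.3f}')
```

Typical VQA answers are short (a word or phrase), which is why the benchmark results further down report simple accuracy-style metrics.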

Papers

Showing 76–100 of 2167 papers

Title | Status | Hype
TinyDrive: Multiscale Visual Question Answering with Selective Token Routing for Autonomous Driving | – | 0
SNAP: A Benchmark for Testing the Effects of Capture Conditions on Fundamental Vision Tasks | Code | 0
Robo2VLM: Visual Question Answering from Large-Scale In-the-Wild Robot Manipulation Datasets | – | 0
Debating for Better Reasoning: An Unsupervised Multimodal Approach | – | 0
Toward Effective Reinforcement Learning Fine-Tuning for Medical VQA in Vision-Language Models | – | 0
PlanGPT-VL: Enhancing Urban Planning with Domain-Specific Vision-Language Models | – | 0
MedAgentBoard: Benchmarking Multi-Agent Collaboration with Conventional Methods for Diverse Medical Tasks | Code | 1
TinyRS-R1: Compact Multimodal Language Model for Remote Sensing | – | 0
RVTBench: A Benchmark for Visual Reasoning Tasks | Code | 0
MedSG-Bench: A Benchmark for Medical Image Sequences Grounding | – | 0
Semantically-Aware Game Image Quality Assessment | – | 0
HumaniBench: A Human-Centric Framework for Large Multimodal Models Evaluation | Code | 0
TCC-Bench: Benchmarking the Traditional Chinese Culture Understanding Capabilities of MLLMs | Code | 0
Enhancing Multi-Image Question Answering via Submodular Subset Selection | – | 0
Variational Visual Question Answering | – | 0
OMGM: Orchestrate Multiple Granularities and Modalities for Efficient Multimodal Retrieval | – | 0
Natural Reflection Backdoor Attack on Vision Language Model for Autonomous Driving | – | 0
MM-Skin: Enhancing Dermatology Vision-Language Model with an Image-Text Dataset Derived from Textbooks | Code | 1
R^3-VQA: "Read the Room" by Video Social Reasoning | – | 0
DiffVQA: Video Quality Assessment Using Diffusion Feature Extractor | – | 0
Breaking Annotation Barriers: Generalized Video Quality Assessment via Ranking-based Self-Supervision | Code | 0
AOR: Anatomical Ontology-Guided Reasoning for Medical Large Multimodal Model in Chest X-Ray Interpretation | – | 0
Task-Oriented Semantic Communication in Large Multimodal Models-based Vehicle Networks | – | 0
AdCare-VLM: Leveraging Large Vision Language Model (LVLM) to Monitor Long-Term Medication Adherence and Care | Code | 0
Unlearning Sensitive Information in Multimodal LLMs: Benchmark and Attack-Defense Evaluation | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | – | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | – | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | – | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | – | Unverified
5 | Kakao Brain | Accuracy | 73.33 | – | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | – | Unverified
7 | 270 | Accuracy | 70.23 | – | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | – | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | – | Unverified
10 | VinVL+L | Accuracy | 64.85 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | – | Unverified
2 | BEiT-3 | Accuracy | 84.19 | – | Unverified
3 | VLMo | Accuracy | 82.78 | – | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | – | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | – | Unverified
6 | CuMo-7B | Accuracy | 82.2 | – | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | – | Unverified
8 | MMU | Accuracy | 81.26 | – | Unverified
9 | InternVL-C | Accuracy | 81.2 | – | Unverified
10 | Lyrics | Accuracy | 81.2 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | – | Unverified
2 | mPLUG-Huge | overall | 83.62 | – | Unverified
3 | ONE-PEACE | overall | 82.52 | – | Unverified
4 | X2-VLM (large) | overall | 81.8 | – | Unverified
5 | VLMo | overall | 81.3 | – | Unverified
6 | SimVLM | overall | 80.34 | – | Unverified
7 | X2-VLM (base) | overall | 80.2 | – | Unverified
8 | VAST | overall | 80.19 | – | Unverified
9 | VALOR | overall | 78.62 | – | Unverified
10 | Prompt Tuning | overall | 78.53 | – | Unverified
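
For context on the Accuracy and overall columns: leaderboards built on the VQA v2 evaluation from visualqa.org score each predicted answer against ten human answers as min(number of matching annotators / 3, 1). The sketch below is a simplified single-pass version of that consensus metric (the official evaluation also normalizes answer strings and averages over annotator subsets), offered only to clarify what such numbers measure; it is not the scoring code used for these tables.

```python
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Simplified consensus-based VQA accuracy: full credit if at least
    3 of the (typically 10) human annotators gave the predicted answer."""
    matches = sum(1 for answer in human_answers if answer == predicted)
    return min(matches / 3.0, 1.0)

# Example: 2 of 10 annotators agree with the prediction -> 2/3 credit.
annotations = ["red", "red", "maroon", "dark red", "dark red",
               "dark red", "dark red", "dark red", "dark red", "dark red"]
print(vqa_accuracy("red", annotations))  # 0.666...
```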