SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system answers natural-language questions about an image. The goal is to teach machines to understand the content of an image well enough to answer free-form questions about it in natural language.

Image Source: visualqa.org
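
To get a feel for the task, here is a minimal inference sketch using the publicly released ViLT checkpoint fine-tuned for VQA v2 on Hugging Face (the image URL and question are illustrative placeholders, not part of any benchmark above):

```python
from PIL import Image
import requests
from transformers import ViltProcessor, ViltForQuestionAnswering

# Illustrative inputs: a COCO validation image and a free-form question.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

# dandelin/vilt-b32-finetuned-vqa is a public checkpoint trained on VQA v2.
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# Encode the image-question pair and pick the highest-scoring answer class.
inputs = processor(image, question, return_tensors="pt")
logits = model(**inputs).logits
answer = model.config.id2label[logits.argmax(-1).item()]
print("Predicted answer:", answer)
```

ViLT treats VQA as classification over a fixed answer vocabulary, which is why the prediction is an argmax over logits rather than generated text; many recent models on the leaderboards below instead generate the answer with a language decoder.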

Papers

Showing 2026-2050 of 2167 papers

Title | Status | Hype
STL-CQA: Structure-based Transformers with Localization and Encoding for Chart Question Answering | - | 0
Straight to the Facts: Learning Knowledge Base Retrieval for Factual Visual Question Answering | - | 0
StructuralLM: Structural Pre-training for Form Understanding | - | 0
Structured Two-stream Attention Network for Video Question Answering | - | 0
Structure Learning for Neural Module Networks | - | 0
Study of Subjective and Objective Quality Assessment of Mobile Cloud Gaming Videos | - | 0
Study of the effect of Sharpness on Blind Video Quality Assessment | - | 0
Subjective and Objective Analysis of Streamed Gaming Videos | - | 0
Subjective and Objective Quality Assessment of Rendered Human Avatar Videos in Virtual Reality | - | 0
Subtleties in the trainability of quantum machine learning models | - | 0
Sunny and Dark Outside?! Improving Answer Consistency in VQA through Entailed Question Generation | - | 0
Supervising the Transfer of Reasoning Patterns in VQA | - | 0
Surgical-LVLM: Learning to Adapt Large Vision-Language Model for Grounded Visual Question Answering in Robotic Surgery | - | 0
SurgicalVLM-Agent: Towards an Interactive AI Co-Pilot for Pituitary Surgery | - | 0
Survey of Recent Advances in Visual Question Answering | - | 0
Survey of Visual Question Answering: Datasets and Techniques | - | 0
Survey of Visual-Semantic Embedding Methods for Zero-Shot Image Retrieval | - | 0
Swarm Intelligence in Geo-Localization: A Multi-Agent Large Vision-Language Model Collaborative Framework | - | 0
Syntax Tree Constrained Graph Network for Visual Question Answering | - | 0
Synthesize Step-by-Step: Tools, Templates and LLMs as Data Generators for Reasoning-Based Chart VQA | - | 0
T2I-FactualBench: Benchmarking the Factuality of Text-to-Image Models with Knowledge-Intensive Concepts | - | 0
Tackling VQA with Pretrained Foundation Models without Further Training | - | 0
Take A Step Back: Rethinking the Two Stages in Visual Reasoning | - | 0
Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | - | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | - | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | - | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | - | Unverified
5 | Kakao Brain | Accuracy | 73.33 | - | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | - | Unverified
7 | 270 | Accuracy | 70.23 | - | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | - | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | - | Unverified
10 | VinVL+L | Accuracy | 64.85 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | - | Unverified
2 | BEiT-3 | Accuracy | 84.19 | - | Unverified
3 | VLMo | Accuracy | 82.78 | - | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | - | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | - | Unverified
6 | CuMo-7B | Accuracy | 82.2 | - | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | - | Unverified
8 | MMU | Accuracy | 81.26 | - | Unverified
9 | Lyrics | Accuracy | 81.2 | - | Unverified
10 | InternVL-C | Accuracy | 81.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | - | Unverified
2 | mPLUG-Huge | overall | 83.62 | - | Unverified
3 | ONE-PEACE | overall | 82.52 | - | Unverified
4 | X2-VLM (large) | overall | 81.8 | - | Unverified
5 | VLMo | overall | 81.3 | - | Unverified
6 | SimVLM | overall | 80.34 | - | Unverified
7 | X2-VLM (base) | overall | 80.2 | - | Unverified
8 | VAST | overall | 80.19 | - | Unverified
9 | VALOR | overall | 78.62 | - | Unverified
10 | Prompt Tuning | overall | 78.53 | - | Unverified
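
For context on the metrics above: on VQA v2-style leaderboards, both "Accuracy" and "overall" typically refer to the consensus accuracy introduced with the original VQA dataset, where a predicted answer is scored against ten human answers as min(#matches / 3, 1), so an answer given by at least three annotators counts as fully correct. A minimal sketch of that scoring rule (the function name is illustrative; the official evaluation additionally normalizes answers and averages over annotator subsets):

```python
def vqa_consensus_accuracy(prediction: str, human_answers: list[str]) -> float:
    """Score one predicted answer against the ten human answers for a question.

    VQA v2 convention: an answer matching >= 3 annotators scores 1.0.
    """
    matches = sum(answer == prediction for answer in human_answers)
    return min(matches / 3.0, 1.0)

# Ten annotator answers for a hypothetical counting question.
answers = ["2", "2", "two", "2", "3", "2", "two", "4", "2", "2"]
print(vqa_consensus_accuracy("2", answers))  # 1.0   (6 matches, capped at 1)
print(vqa_consensus_accuracy("3", answers))  # ~0.33 (1 match / 3)
```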