SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer vision task in which a model is given an image and a natural-language question about it and must produce a natural-language answer. The goal is to build systems that understand visual content and language well enough to answer open-ended questions about what an image shows.

Image Source: visualqa.org
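In practice, a VQA system takes an image and a free-form question and returns a short answer. The sketch below shows what inference looks like with an off-the-shelf model; it is a minimal illustration assuming the Hugging Face transformers library and the public ViLT VQA checkpoint dandelin/vilt-b32-finetuned-vqa, with an arbitrary example image URL and question (none of these come from this page).

    # Minimal VQA inference sketch. Assumes the transformers, torch,
    # Pillow, and requests packages are installed; the checkpoint,
    # image URL, and question are illustrative only.
    import requests
    from PIL import Image
    from transformers import ViltProcessor, ViltForQuestionAnswering

    url = "http://images.cocodataset.org/val2017/000000039769.jpg"
    image = Image.open(requests.get(url, stream=True).raw)
    question = "How many cats are there?"

    processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
    model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

    # Encode the image-question pair and pick the highest-scoring answer
    # from the model's fixed answer vocabulary.
    inputs = processor(image, question, return_tensors="pt")
    logits = model(**inputs).logits
    print(model.config.id2label[logits.argmax(-1).item()])

Classification over a fixed answer vocabulary, as in this sketch, is one common formulation; many of the more recent entries listed below instead generate the answer text with a vision-language model.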

Papers

Showing 1651–1675 of 2167 papers (page 67 of 87)

Title | Status | Hype
VQA with Cascade of Self- and Co-Attention Blocks | - | 0
VSA4VQA: Scaling a Vector Symbolic Architecture to Visual Question Answering on Natural Images | - | 0
Watching the News: Towards VideoQA Models that can Read | - | 0
Weakly Supervised Visual Question Answer Generation | - | 0
Weak Supervision helps Emergence of Word-Object Alignment and improves Vision-Language Tasks | - | 0
Webly Supervised Concept Expansion for General Purpose Vision Models | - | 0
What is needed for simple spatial language capabilities in VQA? | - | 0
What Large Language Models Bring to Text-rich VQA? | - | 0
What makes a good metric? Evaluating automatic metrics for text-to-image consistency | - | 0
When are Lemons Purple? The Concept Association Bias of Vision-Language Models | - | 0
Where is this coming from? Making groundedness count in the evaluation of Document VQA models | - | 0
Where To Look: Focus Regions for Visual Question Answering | - | 0
Which Client is Reliable?: A Reliable and Personalized Prompt-based Federated Learning for Medical Image Question Answering | - | 0
Why context matters in VQA and Reasoning: Semantic interventions for VLM input modalities | - | 0
Why Does a Visual Question Have Different Answers? | - | 0
Why Does the VQA Model Answer No?: Improving Reasoning through Visual and Linguistic Inference | - | 0
WoLF: Wide-scope Large Language Model Framework for CXR Understanding | - | 0
Workshop on Document Intelligence Understanding | - | 0
WSI-LLaVA: A Multimodal Large Language Model for Whole Slide Image | - | 0
WuDaoMM: A large-scale Multi-Modal Dataset for Pre-training models | - | 0
XGPT: Cross-modal Generative Pre-Training for Image Captioning | - | 0
xGQA: Cross-Lingual Visual Question Answering | - | 0
Yin and Yang: Balancing and Answering Binary Visual Questions | - | 0
YouMakeup: A Large-Scale Domain-Specific Multimodal Dataset for Fine-Grained Semantic Comprehension | - | 0
ZALM3: Zero-Shot Enhancement of Vision-Language Alignment via In-Context Information in Multi-Turn Multimodal Medical Dialogue | - | 0

Benchmark Results
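The tables below do not name their underlying benchmarks, so the following is an assumption rather than something stated on this page: open-ended VQA leaderboards that report "Accuracy" or "overall" commonly use the soft accuracy of the standard VQA evaluation protocol, which scores a predicted answer a against the ten human annotations collected per question:

    \mathrm{Acc}(a) \;=\; \min\!\left(\frac{\#\{\text{annotators who answered } a\}}{3},\; 1\right)

Under this rule an answer counts as fully correct when at least three annotators gave it, and the per-question scores are averaged over the test set.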

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | - | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | - | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | - | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | - | Unverified
5 | Kakao Brain | Accuracy | 73.33 | - | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | - | Unverified
7 | 270 | Accuracy | 70.23 | - | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | - | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | - | Unverified
10 | VinVL+L | Accuracy | 64.85 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | - | Unverified
2 | BEiT-3 | Accuracy | 84.19 | - | Unverified
3 | VLMo | Accuracy | 82.78 | - | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | - | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | - | Unverified
6 | CuMo-7B | Accuracy | 82.2 | - | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | - | Unverified
8 | MMU | Accuracy | 81.26 | - | Unverified
9 | Lyrics | Accuracy | 81.2 | - | Unverified
10 | InternVL-C | Accuracy | 81.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | - | Unverified
2 | mPLUG-Huge | overall | 83.62 | - | Unverified
3 | ONE-PEACE | overall | 82.52 | - | Unverified
4 | X2-VLM (large) | overall | 81.8 | - | Unverified
5 | VLMo | overall | 81.3 | - | Unverified
6 | SimVLM | overall | 80.34 | - | Unverified
7 | X2-VLM (base) | overall | 80.2 | - | Unverified
8 | VAST | overall | 80.19 | - | Unverified
9 | VALOR | overall | 78.62 | - | Unverified
10 | Prompt Tuning | overall | 78.53 | - | Unverified