SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a model answers questions about an image. The goal of VQA is to teach machines to understand the content of an image well enough to answer natural-language questions about it.

Image Source: visualqa.org
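
To make the task concrete, here is a minimal inference sketch using the Hugging Face transformers visual-question-answering pipeline. The ViLT checkpoint and the image filename are illustrative placeholders, not models or data from this page's leaderboards.

```python
# Minimal VQA inference sketch using the Hugging Face transformers
# "visual-question-answering" pipeline. The checkpoint and image path
# are illustrative; any compatible VQA model works the same way.
from transformers import pipeline
from PIL import Image

vqa = pipeline(
    "visual-question-answering",
    model="dandelin/vilt-b32-finetuned-vqa",  # public example checkpoint
)

image = Image.open("street_scene.jpg")  # placeholder local image
answers = vqa(image=image, question="How many people are in the photo?")

# The pipeline returns candidate answers ranked by confidence, e.g.
# [{'answer': '2', 'score': 0.87}, ...]
for candidate in answers[:3]:
    print(candidate["answer"], round(candidate["score"], 3))
```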

Papers

Showing 501–525 of 2167 papers

| Title | Status | Hype |
| --- | --- | --- |
| Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations | Code | 1 |
| 3DMIT: 3D Multi-modal Instruction Tuning for Scene Understanding | Code | 1 |
| Evaluating Image Hallucination in Text-to-Image Generation with Question-Answering | Code | 1 |
| GPT-4V-AD: Exploring Grounding Potential of VQA-oriented GPT-4V for Zero-shot Anomaly Detection | Code | 1 |
| mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections | Code | 1 |
| Enhancing Vision-Language Pre-Training with Jointly Learned Questioner and Dense Captioner | Code | 1 |
| Bridging the Gap between 2D and 3D Visual Question Answering: A Fusion Approach for 3D VQA | Code | 1 |
| Enhancing Visual Question Answering through Question-Driven Image Captions as Prompts | Code | 1 |
| MapQA: A Dataset for Question Answering on Choropleth Maps | Code | 1 |
| MixGen: A New Multi-Modal Data Augmentation | Code | 1 |
| eP-ALM: Efficient Perceptual Augmentation of Language Models | Code | 1 |
| MixPHM: Redundancy-Aware Parameter-Efficient Tuning for Low-Resource Visual Question Answering | Code | 1 |
| MLP Architectures for Vision-and-Language Modeling: An Empirical Study | Code | 1 |
| Mining Fine-Grained Image-Text Alignment for Zero-Shot Captioning via Text-Only Training | Code | 1 |
| Break It Down: A Question Understanding Benchmark | Code | 1 |
| MISS: A Generative Pretraining and Finetuning Approach for Med-VQA | Code | 1 |
| ERNIE-Layout: Layout Knowledge Enhanced Pre-training for Visually-rich Document Understanding | Code | 1 |
| End-to-end Knowledge Retrieval with Multi-modal Queries | Code | 1 |
| MIST: Multi-modal Iterative Spatial-Temporal Transformer for Long-form Video Question Answering | Code | 1 |
| MMBERT: Multimodal BERT Pretraining for Improved Medical VQA | Code | 1 |
| EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images | Code | 1 |
| MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research | Code | 1 |
| Efficient Vision-Language Pretraining with Visual Concepts and Hierarchical Alignment | Code | 1 |
| Mimic In-Context Learning for Multimodal Tasks | Code | 1 |
| Mind Your Outliers! Investigating the Negative Impact of Outliers on Active Learning for Visual Question Answering | Code | 1 |

Benchmark Results
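
A note on reading the Accuracy and overall columns: the standard metric defined at visualqa.org scores a prediction against ten human-provided answers and counts it fully correct when at least three annotators gave the same answer, which is also why a "human" baseline scores below 100. Whether each leaderboard below uses this consensus metric or plain exact-match depends on the underlying benchmark, which this page does not name. A minimal sketch of the consensus formula (function and variable names are illustrative):

```python
# Sketch of the visualqa.org consensus accuracy:
#   acc(ans) = min(#annotators who gave ans / 3, 1.0)
# The official evaluator also normalizes answer strings and averages
# over subsets of annotators; this shows only the core formula.
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    matches = sum(1 for ans in human_answers if ans == predicted)
    return min(matches / 3.0, 1.0)

# Ten annotators: seven said "2", three said "3".
humans = ["2"] * 7 + ["3"] * 3
print(vqa_accuracy("2", humans))  # 1.0 (at least 3 annotators agree)
print(vqa_accuracy("3", humans))  # 1.0 (exactly 3 agree)
print(vqa_accuracy("4", humans))  # 0.0 (no annotator agrees)
```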

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | human | Accuracy | 89.3 | | Unverified |
| 2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | | Unverified |
| 3 | TRRNet (Ensemble) | Accuracy | 74.03 | | Unverified |
| 4 | MIL-nbgao | Accuracy | 73.81 | | Unverified |
| 5 | Kakao Brain | Accuracy | 73.33 | | Unverified |
| 6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | | Unverified |
| 7 | 270 | Accuracy | 70.23 | | Unverified |
| 8 | NSM ensemble (updated) | Accuracy | 67.55 | | Unverified |
| 9 | VinVL-DPT | Accuracy | 64.92 | | Unverified |
| 10 | VinVL+L | Accuracy | 64.85 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PaLI | Accuracy | 84.3 | | Unverified |
| 2 | BEiT-3 | Accuracy | 84.19 | | Unverified |
| 3 | VLMo | Accuracy | 82.78 | | Unverified |
| 4 | ONE-PEACE | Accuracy | 82.6 | | Unverified |
| 5 | mPLUG (Huge) | Accuracy | 82.43 | | Unverified |
| 6 | CuMo-7B | Accuracy | 82.2 | | Unverified |
| 7 | X2-VLM (large) | Accuracy | 81.9 | | Unverified |
| 8 | MMU | Accuracy | 81.26 | | Unverified |
| 9 | InternVL-C | Accuracy | 81.2 | | Unverified |
| 10 | Lyrics | Accuracy | 81.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BEiT-3 | overall | 84.03 | | Unverified |
| 2 | mPLUG-Huge | overall | 83.62 | | Unverified |
| 3 | ONE-PEACE | overall | 82.52 | | Unverified |
| 4 | X2-VLM (large) | overall | 81.8 | | Unverified |
| 5 | VLMo | overall | 81.3 | | Unverified |
| 6 | SimVLM | overall | 80.34 | | Unverified |
| 7 | X2-VLM (base) | overall | 80.2 | | Unverified |
| 8 | VAST | overall | 80.19 | | Unverified |
| 9 | VALOR | overall | 78.62 | | Unverified |
| 10 | Prompt Tuning | overall | 78.53 | | Unverified |