SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system is given an image and a natural-language question about it and must produce a natural-language answer. The goal is to build models that understand image content well enough to answer open-ended questions about it.

Image Source: visualqa.org
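
To make the task's input/output contract concrete, here is a minimal sketch using the Hugging Face transformers visual-question-answering pipeline. The checkpoint name and the image path are illustrative assumptions, not something this page prescribes:

```python
# Minimal VQA sketch using the Hugging Face `transformers` pipeline.
# Assumes transformers and Pillow are installed; the checkpoint and
# image file below are assumptions for illustration.
from transformers import pipeline

vqa = pipeline(
    "visual-question-answering",
    model="dandelin/vilt-b32-finetuned-vqa",  # a ViLT model fine-tuned on VQAv2
)

# Input: an image plus a natural-language question.
# Output: candidate answers ranked by confidence.
predictions = vqa(
    image="street_scene.jpg",  # hypothetical local image
    question="How many people are crossing the street?",
)
for pred in predictions:
    print(f"{pred['answer']}: {pred['score']:.3f}")
```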

Papers

Showing 401–450 of 2167 papers

Title | Status | Hype
Multi-modal Understanding and Generation for Medical Images and Text via Vision-Language Pre-Training | Code | 1
CRAFT: A Benchmark for Causal Reasoning About Forces and inTeractions | Code | 1
HIDRO-VQA: High Dynamic Range Oracle for Video Quality Assessment | Code | 1
Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs | Code | 1
A Dataset and Baselines for Visual Question Answering on Art | Code | 1
CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers | Code | 1
How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs | Code | 1
AMD-Hummingbird: Towards an Efficient Text-to-Video Model | Code | 1
GRIT: General Robust Image Task Benchmark | Code | 1
Notes-guided MLLM Reasoning: Enhancing MLLM with Knowledge and Visual Notes for Visual Question Answering | Code | 1
HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles | Code | 1
Detecting Hate Speech in Multi-modal Memes | Code | 1
OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge | Code | 1
NuScenes-MQA: Integrated Evaluation of Captions and QA for Autonomous Driving Datasets using Markup Annotations | Code | 1
DeVLBert: Learning Deconfounded Visio-Linguistic Representations | Code | 1
HallE-Control: Controlling Object Hallucination in Large Multimodal Models | Code | 1
Ontology-guided Semantic Composition for Zero-Shot Learning | Code | 1
Open3DVQA: A Benchmark for Comprehensive Spatial Reasoning with Multimodal Large Language Model in Open Space | Code | 1
How Much Can CLIP Benefit Vision-and-Language Tasks? | Code | 1
In Defense of Grid Features for Visual Question Answering | Code | 1
Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA | Code | 1
GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering | Code | 1
Overcoming Language Priors with Self-supervised Learning for Visual Question Answering | Code | 1
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization | Code | 1
Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions? | Code | 1
A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge | Code | 1
GraphVQA: Language-Guided Graph Neural Networks for Graph-based Visual Question Answering | Code | 1
Pano-AVQA: Grounded Audio-Visual Question Answering on 360° Videos | Code | 1
ParlAI: A Dialog Research Software Platform | Code | 1
Passage Retrieval for Outside-Knowledge Visual Question Answering | Code | 1
Can I Trust Your Answer? Visually Grounded Video Question Answering | Code | 1
GeoLLaVA-8K: Scaling Remote-Sensing Multimodal Large Language Models to 8K Resolution | Code | 1
PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models | Code | 1
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback | Code | 1
Going Full-TILT Boogie on Document Understanding with Text-Image-Layout Transformer | Code | 1
End-to-end Document Recognition and Understanding with Dessurt | Code | 1
Pixel-BERT: Aligning Image Pixels with Text by Deep Multi-Modal Transformers | Code | 1
Graphhopper: Multi-Hop Scene Graph Reasoning for Visual Question Answering | Code | 1
Calibrating Concepts and Operations: Towards Symbolic Reasoning on Real Images | Code | 1
Generative Bias for Robust Visual Question Answering | Code | 1
Probing Image-Language Transformers for Verb Understanding | Code | 1
Debiased Visual Question Answering from Feature and Sample Perspectives | Code | 1
Debiasing Multimodal Models via Causal Information Minimization | Code | 1
Declaration-based Prompt Tuning for Visual Question Answering | Code | 1
GeneAnnotator: A Semi-automatic Annotation Tool for Visual Scene Graph | Code | 1
Answer Mining from a Pool of Images: Towards Retrieval-Based Visual Question Answering | Code | 1
Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs? | Code | 1
Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers | Code | 1
Decoupled Seg Tokens Make Stronger Reasoning Video Segmenter and Grounder | Code | 1
From the Least to the Most: Building a Plug-and-Play Visual Reasoner via Data Synthesis | Code | 1
Page 9 of 44

Benchmark Results

#  | Model                                  | Metric   | Claimed | Verified | Status
1  | human                                  | Accuracy | 89.3    |          | Unverified
2  | DREAM+Unicoder-VL (MSRA)               | Accuracy | 76.04   |          | Unverified
3  | TRRNet (Ensemble)                      | Accuracy | 74.03   |          | Unverified
4  | MIL-nbgao                              | Accuracy | 73.81   |          | Unverified
5  | Kakao Brain                            | Accuracy | 73.33   |          | Unverified
6  | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14   |          | Unverified
7  | 270                                    | Accuracy | 70.23   |          | Unverified
8  | NSM ensemble (updated)                 | Accuracy | 67.55   |          | Unverified
9  | VinVL-DPT                              | Accuracy | 64.92   |          | Unverified
10 | VinVL+L                                | Accuracy | 64.85   |          | Unverified

#  | Model          | Metric   | Claimed | Verified | Status
1  | PaLI           | Accuracy | 84.3    |          | Unverified
2  | BEiT-3         | Accuracy | 84.19   |          | Unverified
3  | VLMo           | Accuracy | 82.78   |          | Unverified
4  | ONE-PEACE      | Accuracy | 82.6    |          | Unverified
5  | mPLUG (Huge)   | Accuracy | 82.43   |          | Unverified
6  | CuMo-7B        | Accuracy | 82.2    |          | Unverified
7  | X2-VLM (large) | Accuracy | 81.9    |          | Unverified
8  | MMU            | Accuracy | 81.26   |          | Unverified
9  | Lyrics         | Accuracy | 81.2    |          | Unverified
10 | InternVL-C     | Accuracy | 81.2    |          | Unverified

#  | Model          | Metric  | Claimed | Verified | Status
1  | BEiT-3         | overall | 84.03   |          | Unverified
2  | mPLUG-Huge     | overall | 83.62   |          | Unverified
3  | ONE-PEACE      | overall | 82.52   |          | Unverified
4  | X2-VLM (large) | overall | 81.8    |          | Unverified
5  | VLMo           | overall | 81.3    |          | Unverified
6  | SimVLM         | overall | 80.34   |          | Unverified
7  | X2-VLM (base)  | overall | 80.2    |          | Unverified
8  | VAST           | overall | 80.19   |          | Unverified
9  | VALOR          | overall | 78.62   |          | Unverified
10 | Prompt Tuning  | overall | 78.53   |          | Unverified
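
For context on the Accuracy and overall columns: VQA leaderboards conventionally report the consensus accuracy introduced with the original VQA dataset, under which a predicted answer earns min(matching human answers / 3, 1) credit, averaged over all questions. Whether every entry above uses exactly this metric is an assumption; the commonly quoted simplified form looks like this:

```python
# Simplified form of the VQA consensus accuracy (Antol et al., 2015):
# a prediction counts as fully correct if at least 3 of the ~10 human
# annotators gave the same answer, with partial credit below that.
# (The official evaluation also normalizes answer strings and averages
# over annotator subsets; this sketch omits those details.)
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    matches = sum(1 for ans in human_answers if ans == predicted)
    return min(matches / 3.0, 1.0)

# Example: 2 of 10 annotators answered "blue" -> credit of 2/3.
print(round(vqa_accuracy("blue", ["blue", "blue"] + ["navy"] * 8), 3))  # 0.667
```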