SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system answers natural-language questions about an image. The goal is to teach machines to understand an image's content well enough to answer arbitrary questions about it in natural language.
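
To make the task concrete, here is a minimal sketch of VQA inference with a pretrained vision-language model. The BLIP checkpoint and the Hugging Face transformers API are assumptions chosen for illustration; this page does not prescribe any particular model.

```python
# Minimal VQA inference sketch: image + natural-language question in,
# short free-form answer out. Assumes `pip install transformers pillow torch`
# and the public Salesforce/blip-vqa-base checkpoint (an illustrative
# choice, not one endorsed by this page).
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

image = Image.open("example.jpg").convert("RGB")  # any local image works
question = "How many dogs are in the picture?"

# The processor tokenizes the question and preprocesses the image into a
# single batch of tensors; generate() decodes a short answer string.
inputs = processor(image, question, return_tensors="pt")
output_ids = model.generate(**inputs)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```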

Image Source: visualqa.org

Papers

Showing 551–600 of 2167 papers

| Title | Status | Hype |
|---|---|---|
| Answer Mining from a Pool of Images: Towards Retrieval-Based Visual Question Answering | Code | 1 |
| Are Bias Mitigation Techniques for Deep Learning Effective? | Code | 1 |
| Learning to Discretely Compose Reasoning Module Networks for Video Captioning | Code | 1 |
| How to Configure Good In-Context Sequence for Visual Question Answering | Code | 1 |
| End-to-end Document Recognition and Understanding with Dessurt | Code | 1 |
| Let Androids Dream of Electric Sheep: A Human-like Image Implication Understanding and Reasoning Framework | Code | 1 |
| End-to-end Knowledge Retrieval with Multi-modal Queries | Code | 1 |
| Calibrating Concepts and Operations: Towards Symbolic Reasoning on Real Images | Code | 1 |
| Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering | Code | 1 |
| DualVGR: A Dual-Visual Graph Reasoning Unit for Video Question Answering | Code | 1 |
| How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs | Code | 1 |
| IMPACT: A Large-scale Integrated Multimodal Patent Analysis and Creation Dataset for Design Patents | Code | 1 |
| HIDRO-VQA: High Dynamic Range Oracle for Video Quality Assessment | Code | 1 |
| Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations | Code | 1 |
| Hierarchical Conditional Relation Networks for Video Question Answering | Code | 1 |
| Lumen: Unleashing Versatile Vision-Centric Capabilities of Large Multimodal Models | Code | 1 |
| Boosting the Power of Small Multimodal Reasoning Models to Match Larger Models with Self-Consistency Training | Code | 1 |
| Faithful Multimodal Explanation for Visual Question Answering | Code | 1 |
| Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions? | Code | 1 |
| MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale | Code | 1 |
| Dual-Key Multimodal Backdoors for Visual Question Answering | Code | 1 |
| Enhancing Visual Question Answering through Question-Driven Image Captions as Prompts | Code | 1 |
| MapQA: A Dataset for Question Answering on Choropleth Maps | Code | 1 |
| MC-CoT: A Modular Collaborative CoT Framework for Zero-shot Medical-VQA with LLM and MLLM Integration | Code | 1 |
| Hierarchical multimodal transformers for Multi-Page DocVQA | Code | 1 |
| OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge | Code | 1 |
| FAVER: Blind Quality Prediction of Variable Frame Rate Videos | Code | 1 |
| ERNIE-Layout: Layout Knowledge Enhanced Pre-training for Visually-rich Document Understanding | Code | 1 |
| HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles | Code | 1 |
| MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models | Code | 1 |
| MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research | Code | 1 |
| Mimic In-Context Learning for Multimodal Tasks | Code | 1 |
| Exploring Opinion-unaware Video Quality Assessment with Semantic Affinity Criterion | Code | 1 |
| HallE-Control: Controlling Object Hallucination in Large Multimodal Models | Code | 1 |
| MISS: A Generative Pretraining and Finetuning Approach for Med-VQA | Code | 1 |
| Evaluating Image Hallucination in Text-to-Image Generation with Question-Answering | Code | 1 |
| Hierarchical Question-Image Co-Attention for Visual Question Answering | Code | 1 |
| LaPA: Latent Prompt Assist Model For Medical Visual Question Answering | Code | 1 |
| MedBLIP: Bootstrapping Language-Image Pre-training from 3D Medical Images and Texts | Code | 1 |
| MMBERT: Multimodal BERT Pretraining for Improved Medical VQA | Code | 1 |
| An Evaluation of Image-Based Verb Prediction Models against Human Eye-Tracking Data | — | 0 |
| Adventurer's Treasure Hunt: A Transparent System for Visually Grounded Compositional Visual Question Answering based on Scene Graphs | — | 0 |
| D-Rax: Domain-specific Radiologic assistant leveraging multi-modal data and eXpert model predictions | — | 0 |
| Grounding Chest X-Ray Visual Question Answering with Generated Radiology Reports | — | 0 |
| DoReMi: Grounding Language Model by Detecting and Recovering from Plan-Execution Misalignment | — | 0 |
| An Evaluation of GPT-4V and Gemini in Online VQA | — | 0 |
| Grounding Answers for Visual Questions Asked by Visually Impaired People | — | 0 |
| Grounding Complex Navigational Instructions Using Scene Graphs | — | 0 |
| Domain-robust VQA with diverse datasets and methods but no target labels | — | 0 |
| Do Explanations make VQA Models more Predictable to a Human? | — | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | human | Accuracy | 89.3 | — | Unverified |
| 2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | — | Unverified |
| 3 | TRRNet (Ensemble) | Accuracy | 74.03 | — | Unverified |
| 4 | MIL-nbgao | Accuracy | 73.81 | — | Unverified |
| 5 | Kakao Brain | Accuracy | 73.33 | — | Unverified |
| 6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | — | Unverified |
| 7 | 270 | Accuracy | 70.23 | — | Unverified |
| 8 | NSM ensemble (updated) | Accuracy | 67.55 | — | Unverified |
| 9 | VinVL-DPT | Accuracy | 64.92 | — | Unverified |
| 10 | VinVL+L | Accuracy | 64.85 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | PaLI | Accuracy | 84.3 | — | Unverified |
| 2 | BEiT-3 | Accuracy | 84.19 | — | Unverified |
| 3 | VLMo | Accuracy | 82.78 | — | Unverified |
| 4 | ONE-PEACE | Accuracy | 82.6 | — | Unverified |
| 5 | mPLUG (Huge) | Accuracy | 82.43 | — | Unverified |
| 6 | CuMo-7B | Accuracy | 82.2 | — | Unverified |
| 7 | X2-VLM (large) | Accuracy | 81.9 | — | Unverified |
| 8 | MMU | Accuracy | 81.26 | — | Unverified |
| 9 | InternVL-C | Accuracy | 81.2 | — | Unverified |
| 10 | Lyrics | Accuracy | 81.2 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BEiT-3 | overall | 84.03 | — | Unverified |
| 2 | mPLUG-Huge | overall | 83.62 | — | Unverified |
| 3 | ONE-PEACE | overall | 82.52 | — | Unverified |
| 4 | X2-VLM (large) | overall | 81.8 | — | Unverified |
| 5 | VLMo | overall | 81.3 | — | Unverified |
| 6 | SimVLM | overall | 80.34 | — | Unverified |
| 7 | X2-VLM (base) | overall | 80.2 | — | Unverified |
| 8 | VAST | overall | 80.19 | — | Unverified |
| 9 | VALOR | overall | 78.62 | — | Unverified |
| 10 | Prompt Tuning | overall | 78.53 | — | Unverified |
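
For context on the numbers above: open-ended VQA leaderboards conventionally report the consensus accuracy metric introduced with the original VQA benchmark (Antol et al., 2015), where a predicted answer is scored against ten human annotations. Whether every leaderboard on this page uses exactly that formula is an assumption; the sketch below shows the commonly used simplified form.

```python
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Simplified VQA consensus accuracy: an answer counts as fully
    correct if at least 3 of the (typically 10) human annotators gave it.
    The official evaluator additionally averages over all subsets of 9
    annotators and normalizes answer strings first; both are omitted here.
    """
    matches = sum(1 for ans in human_answers if ans == predicted)
    return min(matches / 3.0, 1.0)

# 4 of 10 annotators answered "2": full credit for "2", none for "4".
annotations = ["2"] * 4 + ["3"] * 6
print(vqa_accuracy("2", annotations))  # 1.0
print(vqa_accuracy("4", annotations))  # 0.0
```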