SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer vision task in which a model is given an image and a natural-language question about it and must produce an answer. The goal is to teach machines to understand the content of an image well enough to answer arbitrary questions about it in natural language.

Image Source: visualqa.org
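
In practice, a VQA model takes an image plus a free-form question and returns a ranked list of candidate answers with confidence scores. Below is a minimal inference sketch using the Hugging Face transformers "visual-question-answering" pipeline with the publicly released ViLT checkpoint; the image path and question are illustrative placeholders, not part of this page.

```python
# Minimal VQA inference sketch (Hugging Face transformers pipeline + ViLT).
# The image file and question below are hypothetical examples.
from transformers import pipeline
from PIL import Image

vqa = pipeline(
    "visual-question-answering",
    model="dandelin/vilt-b32-finetuned-vqa",
)

image = Image.open("example.jpg")  # any RGB photo
results = vqa(image=image, question="What color is the umbrella?", top_k=3)

# The pipeline returns a list of {"answer": str, "score": float} candidates,
# ranked by confidence.
for candidate in results:
    print(f"{candidate['answer']}: {candidate['score']:.3f}")
```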

Papers

Showing 2101-2150 of 2167 papers

Title | Status | Hype
Focal Visual-Text Attention for Visual Question Answering | Code | 0
Focal Visual-Text Attention for Memex Question Answering | Code | 0
Context-VQA: Towards Context-Aware and Purposeful Visual Question Answering | Code | 0
A Diagram Is Worth A Dozen Images | Code | 0
A Simple Loss Function for Improving the Convergence and Accuracy of Visual Question Answering Models | Code | 0
UNK-VQA: A Dataset and a Probe into the Abstention Ability of Multi-modal Large Models | Code | 0
Contextual Dropout: An Efficient Sample-Dependent Dropout Module | Code | 0
A Simple Baseline for Knowledge-Based Visual Question Answering | Code | 0
Select, Substitute, Search: A New Benchmark for Knowledge-Augmented Visual Question Answering | Code | 0
Self-Critical Reasoning for Robust Visual Question Answering | Code | 0
Adaptively Clustering Neighbor Elements for Image-Text Generation | Code | 0
Zero-shot Translation of Attention Patterns in VQA Models to Natural Language | Code | 0
Filling the Image Information Gap for VQA: Prompting Large Language Models to Proactively Ask Questions | Code | 0
Uncovering the Full Potential of Visual Grounding Methods in VQA | Code | 0
Self Supervision for Attention Networks | Code | 0
ArtQuest: Countering Hidden Language Biases in ArtVQA | Code | 0
Analyzing Modular Approaches for Visual Question Decomposition | Code | 0
Semantically Distributed Robust Optimization for Vision-and-Language Inference | Code | 0
Semantically Equivalent Adversarial Rules for Debugging NLP models | Code | 0
Adaptive loose optimization for robust question answering | Code | 0
FigureQA: An Annotated Figure Dataset for Visual Reasoning | Code | 0
SemiHVision: Enhancing Medical Multimodal Models with a Semi-Human Annotated Dataset and Fine-Tuned Instruction Generation | Code | 0
Understanding Attention for Vision-and-Language Tasks | Code | 0
Understanding Guided Image Captioning Performance across Domains | Code | 0
Separate and Locate: Rethink the Text in Text-based Visual Question Answering | Code | 0
Few-Shot Multimodal Explanation for Visual Question Answering | Code | 0
An Entropy Clustering Approach for Assessing Visual Question Difficulty | Code | 0
Adapting Lightweight Vision Language Models for Radiological Visual Question Answering | Code | 0
ShapeWorld - A new test methodology for multimodal language understanding | Code | 0
Visual Question Answering: A Survey of Methods and Datasets | Code | 0
Federated Document Visual Question Answering: A Pilot Study | Code | 0
Show, Ask, Attend, and Answer: A Strong Baseline For Visual Question Answering | Code | 0
Understanding Multimodal LLMs: the Mechanistic Interpretability of Llava in Visual Question Answering | Code | 0
Siamese Tracking with Lingual Object Constraints | Code | 0
Why Did the Chicken Cross the Road? Rephrasing and Analyzing Ambiguous Questions in VQA | Code | 0
Simple Baseline for Visual Question Answering | Code | 0
Compressing And Debiasing Vision-Language Pre-Trained Models for Visual Question Answering | Code | 0
ActivityNet-QA: A Dataset for Understanding Complex Web Videos via Question Answering | Code | 0
ActionCOMET: A Zero-shot Approach to Learn Image-specific Commonsense Concepts about Actions | Code | 0
Factor Graph Attention | Code | 0
12-in-1: Multi-Task Vision and Language Representation Learning | Code | 0
VQA Therapy: Exploring Answer Differences by Visually Grounding Answers | Code | 0
Single-Stream Multi-Level Alignment for Vision-Language Pretraining | Code | 0
Exploring the Potential of Encoder-free Architectures in 3D LMMs | Code | 0
Why do These Match? Explaining the Behavior of Image Similarity Models | Code | 0
Exploring the Effectiveness of Video Perceptual Representation in Blind Video Quality Assessment | Code | 0
Visual Question Answering: Datasets, Algorithms, and Future Challenges | Code | 0
Exploring Modulated Detection Transformer as a Tool for Action Recognition in Videos | Code | 0
Exploring Models and Data for Image Question Answering | Code | 0
SlotPi: Physics-informed Object-centric Reasoning Models | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | — | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | — | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | — | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | — | Unverified
5 | Kakao Brain | Accuracy | 73.33 | — | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | — | Unverified
7 | 270 | Accuracy | 70.23 | — | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | — | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | — | Unverified
10 | VinVL+L | Accuracy | 64.85 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | — | Unverified
2 | BEiT-3 | Accuracy | 84.19 | — | Unverified
3 | VLMo | Accuracy | 82.78 | — | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | — | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | — | Unverified
6 | CuMo-7B | Accuracy | 82.2 | — | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | — | Unverified
8 | MMU | Accuracy | 81.26 | — | Unverified
9 | Lyrics | Accuracy | 81.2 | — | Unverified
10 | InternVL-C | Accuracy | 81.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | — | Unverified
2 | mPLUG-Huge | overall | 83.62 | — | Unverified
3 | ONE-PEACE | overall | 82.52 | — | Unverified
4 | X2-VLM (large) | overall | 81.8 | — | Unverified
5 | VLMo | overall | 81.3 | — | Unverified
6 | SimVLM | overall | 80.34 | — | Unverified
7 | X2-VLM (base) | overall | 80.2 | — | Unverified
8 | VAST | overall | 80.19 | — | Unverified
9 | VALOR | overall | 78.62 | — | Unverified
10 | Prompt Tuning | overall | 78.53 | — | Unverified
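
The Accuracy figures above are the numbers claimed by each submission; none have been independently verified. On the original VQA datasets, accuracy is a consensus metric computed against ten human-provided answers: a prediction earns min(n/3, 1), where n is the number of annotators who gave that exact (normalized) answer, so three agreeing annotators yield full credit. Below is a minimal sketch of that core formula, assuming answers are already normalized; the official evaluator additionally averages over all 10-choose-9 annotator subsets, and individual leaderboards here may use their own variants.

```python
# Consensus-based VQA accuracy (VQA v1/v2 style), simplified:
# a prediction is fully correct if at least 3 of 10 annotators gave it.
from collections import Counter

def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Score one prediction: min(# matching annotators / 3, 1)."""
    counts = Counter(human_answers)
    return min(counts[predicted] / 3.0, 1.0)

# Hypothetical example with 10 annotator answers.
answers = ["blue"] * 4 + ["navy"] * 2 + ["dark blue"] * 2 + ["teal"] * 2
print(vqa_accuracy("blue", answers))  # 1.0 (4 annotators agree, >= 3)
print(vqa_accuracy("navy", answers))  # ~0.667 (2 of the 3 needed)
print(vqa_accuracy("red", answers))   # 0.0 (no annotator agrees)
```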