
Visual Question Answering (VQA)

Visual Question Answering (VQA) is a task at the intersection of computer vision and natural language processing: given an image and a free-form question about it, a model must understand the image content and produce an answer in natural language.

Image Source: visualqa.org
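
As a concrete illustration of the task, the short sketch below runs an off-the-shelf pretrained VQA model on a single image-question pair. It assumes the Hugging Face transformers and Pillow libraries and the public dandelin/vilt-b32-finetuned-vqa checkpoint; none of this is tied to any specific paper or leaderboard entry on this page.

```python
# Minimal VQA inference sketch (assumed setup: `transformers`, `Pillow`, `requests`,
# and the public ViLT checkpoint fine-tuned on VQAv2).
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# Any image plus a free-form natural-language question about it.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are on the couch?"

# ViLT casts VQA as classification over a fixed vocabulary of frequent answers.
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

inputs = processor(image, question, return_tensors="pt")
logits = model(**inputs).logits
answer = model.config.id2label[logits.argmax(-1).item()]
print("Predicted answer:", answer)
```

Classification-style models like this one choose from a few thousand frequent answers, while generative vision-language models decode the answer as free-form text; both formulations appear among the papers listed below.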

Papers

Showing 601–650 of 2,167 papers

Every paper listed on this page currently has a Hype score of 0 and no verification status.

Does my multimodal model learn cross-modal interactions? It's harder to tell than you might think!
AdvDreamer Unveils: Are Vision-Language Models Truly Ready for Real-World 3D Variations?
Document Visual Question Answering Challenge 2020
An Empirical Study on the Language Modal in Visual Question Answering
Document Collection Visual Question Answering
A Systematic Evaluation of GPT-4V's Multimodal Capability for Medical Image Analysis
Hyper-dimensional computing for a visual question-answering system that is trainable end-to-end
Hypo3D: Exploring Hypothetical Reasoning in 3D
Document AI: Benchmarks, Models and Applications
An Empirical Study on the Generalization Power of Neural Representations Learned via Visual Guessing Games
Binding Touch to Everything: Learning Unified Multimodal Tactile Representations
Divide, Evaluate, and Refine: Evaluating and Improving Text-to-Image Alignment with Iterative VQA Feedback
A Comprehensive Evaluation of Multi-Modal Large Language Models for Endoscopy Analysis
HVS Revisited: A Comprehensive Video Quality Assessment Framework
Diversity and Consistency: Exploring Visual Question-Answer Pair Generation
Advancing Video Quality Assessment for AIGC
Distraction-free Embeddings for Robust VQA
Hummingbird: High Fidelity Image Generation via Multimodal Context Alignment
Disentangling Knowledge-based and Visual Reasoning by Question Decomposition in KB-VQA
Beyond VQA: Generating Multi-word Answer and Rationale to Visual Questions
An Empirical Study on Leveraging Scene Graphs for Visual Question Answering
Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions?
Directional Gradient Projection for Robust Fine-Tuning of Foundation Models
Beyond the Hype: A dispassionate look at vision-language models in medical scenario
DiN: Diffusion Model for Robust Medical VQA with Semantic Noisy Labels
DiffVQA: Video Quality Assessment Using Diffusion Feature Extractor
Advancing Surgical VQA with Scene Graph Knowledge
Hyperbolic Attention Networks
How to find a good image-text embedding for remote sensing visual question answering?
How Transferable are Reasoning Patterns in VQA?
Advancing Multimodal Medical Capabilities of Gemini
How to Design Sample and Computationally Efficient VQA Models
How Well Can Vision-Language Models Understand Humans' Intention? An Open-ended Theory of Mind Question Evaluation Benchmark
Differentiable End-to-End Program Executor for Sample and Computationally Efficient VQA
DIEM: Decomposition-Integration Enhancing Multimodal Insights
Beyond Human Vision: The Role of Large Vision Language Models in Microscope Image Analysis
Beyond Captioning: Task-Specific Prompting for Improved VLM Performance in Mathematical Reasoning
How (not) to ensemble LVLMs for VQA
HRVQA: A Visual Question Answering Benchmark for High-Resolution Aerial Images
Advancing Large Multi-modal Models with Explicit Chain-of-Reasoning and Visual Question Generation
Detecting Multimodal Situations with Insufficient Context and Abstaining from Baseless Predictions
Detect, Describe, Discriminate: Moving Beyond VQA for MLLM Evaluation
BESTMVQA: A Benchmark Evaluation System for Medical Visual Question Answering
Detect2Interact: Localizing Object Key Field in Visual Question Answering (VQA) with LLMs
An Empirical Study of Batch Normalization and Group Normalization in Conditional Computation
How good are deep models in understanding the generated images?
How Much Can CLIP Benefit Vision-and-Language Tasks?
DePlot: One-shot visual language reasoning by plot-to-table translation
What BERT Sees: Cross-Modal Transfer for Visual Question Generation

Benchmark Results
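
The tables below list one claimed Accuracy (or overall) score per model. As a reference point, a widely used convention for scoring open-ended answers is the consensus-based VQA v2 metric, in which a prediction is compared against the ten human answers collected for each question; the sketch below assumes that convention rather than taking it from this page.

```python
# Simplified consensus-based VQA accuracy (VQA v2 convention, assumed here):
# a prediction earns min(#matching human answers / 3, 1) credit per question,
# and the benchmark score is the mean over all questions.
def vqa_accuracy(prediction: str, human_answers: list[str]) -> float:
    pred = prediction.strip().lower()
    matches = sum(ans.strip().lower() == pred for ans in human_answers)
    return min(matches / 3.0, 1.0)

# Example: 6 of the 10 annotators answered "2", so the prediction gets full credit.
print(vqa_accuracy("2", ["2", "two", "2", "2", "2", "3", "two", "2", "2", "4"]))  # 1.0
```

The official evaluation additionally averages this score over all subsets of nine of the ten human answers and applies further answer-string normalization, so the function above is only a simplified sketch.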

#  | Model                                  | Metric   | Claimed | Verified | Status
1  | human                                  | Accuracy | 89.3    |          | Unverified
2  | DREAM+Unicoder-VL (MSRA)               | Accuracy | 76.04   |          | Unverified
3  | TRRNet (Ensemble)                      | Accuracy | 74.03   |          | Unverified
4  | MIL-nbgao                              | Accuracy | 73.81   |          | Unverified
5  | Kakao Brain                            | Accuracy | 73.33   |          | Unverified
6  | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14   |          | Unverified
7  | 270                                    | Accuracy | 70.23   |          | Unverified
8  | NSM ensemble (updated)                 | Accuracy | 67.55   |          | Unverified
9  | VinVL-DPT                              | Accuracy | 64.92   |          | Unverified
10 | VinVL+L                                | Accuracy | 64.85   |          | Unverified

#  | Model          | Metric   | Claimed | Verified | Status
1  | PaLI           | Accuracy | 84.3    |          | Unverified
2  | BEiT-3         | Accuracy | 84.19   |          | Unverified
3  | VLMo           | Accuracy | 82.78   |          | Unverified
4  | ONE-PEACE      | Accuracy | 82.6    |          | Unverified
5  | mPLUG (Huge)   | Accuracy | 82.43   |          | Unverified
6  | CuMo-7B        | Accuracy | 82.2    |          | Unverified
7  | X2-VLM (large) | Accuracy | 81.9    |          | Unverified
8  | MMU            | Accuracy | 81.26   |          | Unverified
9  | Lyrics         | Accuracy | 81.2    |          | Unverified
10 | InternVL-C     | Accuracy | 81.2    |          | Unverified

#  | Model          | Metric  | Claimed | Verified | Status
1  | BEiT-3         | overall | 84.03   |          | Unverified
2  | mPLUG-Huge     | overall | 83.62   |          | Unverified
3  | ONE-PEACE      | overall | 82.52   |          | Unverified
4  | X2-VLM (large) | overall | 81.8    |          | Unverified
5  | VLMo           | overall | 81.3    |          | Unverified
6  | SimVLM         | overall | 80.34   |          | Unverified
7  | X2-VLM (base)  | overall | 80.2    |          | Unverified
8  | VAST           | overall | 80.19   |          | Unverified
9  | VALOR          | overall | 78.62   |          | Unverified
10 | Prompt Tuning  | overall | 78.53   |          | Unverified