SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system answers natural-language questions about an image. The goal is to teach machines to understand the content of an image well enough to answer open-ended questions about it in natural language.

Image Source: visualqa.org
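
To make the task concrete, here is a minimal inference sketch in Python. It assumes the Hugging Face transformers library and the publicly released ViLT checkpoint dandelin/vilt-b32-finetuned-vqa; neither is part of this page, and any VQA-finetuned model could stand in. The image URL is an illustrative COCO photo, not an example from this site.

# Minimal VQA inference sketch (assumes: pip install transformers torch pillow requests).
import requests
import torch
from PIL import Image
from transformers import ViltForQuestionAnswering, ViltProcessor

# An illustrative COCO validation image (two cats on a couch).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# ViLT casts VQA as classification over a fixed answer vocabulary:
# encode the image-question pair, then take the highest-scoring answer.
inputs = processor(image, question, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # e.g. "2"

Classification over a closed answer set is only one formulation of the task; many recent VQA models are generative multimodal LLMs that decode the answer as free-form text instead.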

Papers

Showing 251–300 of 2167 papers

Title | Status | Hype
IMPACT: A Large-scale Integrated Multimodal Patent Analysis and Creation Dataset for Design Patents | Code | 1
Instruction-Guided Visual Masking | Code | 1
How to Configure Good In-Context Sequence for Visual Question Answering | Code | 1
Meta-Learning via Classifier(-free) Diffusion Guidance | Code | 1
How Much Can CLIP Benefit Vision-and-Language Tasks? | Code | 1
AssistQ: Affordance-centric Question-driven Task Completion for Egocentric Assistant | Code | 1
How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs | Code | 1
HYDRA: A Hyper Agent for Dynamic Compositional Visual Reasoning | Code | 1
Hierarchical Conditional Relation Networks for Video Question Answering | Code | 1
Align before Fuse: Vision and Language Representation Learning with Momentum Distillation | Code | 1
Hierarchical multimodal transformers for Multi-Page DocVQA | Code | 1
Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning | Code | 1
HIDRO-VQA: High Dynamic Range Oracle for Video Quality Assessment | Code | 1
Hierarchical Question-Image Co-Attention for Visual Question Answering | Code | 1
Hypergraph Transformer: Weakly-supervised Multi-hop Reasoning for Knowledge-based Visual Question Answering | Code | 1
Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning | Code | 1
CLEVR-Math: A Dataset for Compositional Language, Visual and Mathematical Reasoning | Code | 1
Align and Prompt: Video-and-Language Pre-training with Entity Prompts | Code | 1
HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles | Code | 1
GRIT: General Robust Image Task Benchmark | Code | 1
Greedy Gradient Ensemble for Robust Visual Question Answering | Code | 1
HallE-Control: Controlling Object Hallucination in Large Multimodal Models | Code | 1
CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations | Code | 1
Comprehensive Visual Question Answering on Point Clouds through Compositional Scene Manipulation | Code | 1
Graphhopper: Multi-Hop Scene Graph Reasoning for Visual Question Answering | Code | 1
Classification-Regression for Chart Comprehension | Code | 1
AIM 2024 Challenge on Compressed Video Quality Assessment: Methods and Results | Code | 1
CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning | Code | 1
Graph Optimal Transport for Cross-Domain Alignment | Code | 1
AIGV-Assessor: Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM | Code | 1
GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering | Code | 1
ActiView: Evaluating Active Perception Ability for Multimodal Large Language Models | Code | 1
AI2-THOR: An Interactive 3D Environment for Visual AI | Code | 1
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization | Code | 1
GeoLLaVA-8K: Scaling Remote-Sensing Multimodal Large Language Models to 8K Resolution | Code | 1
Clover: Towards A Unified Video-Language Alignment and Fusion Model | Code | 1
Going Full-TILT Boogie on Document Understanding with Text-Image-Layout Transformer | Code | 1
GraphVQA: Language-Guided Graph Neural Networks for Graph-based Visual Question Answering | Code | 1
Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations | Code | 1
GeneAnnotator: A Semi-automatic Annotation Tool for Visual Scene Graph | Code | 1
A Hitchhikers Guide to Fine-Grained Face Forgery Detection Using Common Sense Reasoning | Code | 1
Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs? | Code | 1
FunQA: Towards Surprising Video Comprehension | Code | 1
Gemini Goes to Med School: Exploring the Capabilities of Multimodal Large Language Models on Medical Challenge Problems & Hallucinations | Code | 1
Generative Bias for Robust Visual Question Answering | Code | 1
A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models | Code | 1
Are Vision Language Models Ready for Clinical Diagnosis? A 3D Medical Benchmark for Tumor-centric Visual Question Answering | Code | 1
FoodieQA: A Multimodal Dataset for Fine-Grained Understanding of Chinese Food Culture | Code | 1
Check It Again: Progressive Visual Question Answering via Visual Entailment | Code | 1
Found a Reason for me? Weakly-supervised Grounded Visual Question Answering using Capsules | Code | 1
Page 6 of 44

Benchmark Results
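
The "Accuracy" and "overall" numbers below follow the conventions of the underlying leaderboards; on VQA v2, for instance, accuracy is the consensus metric from the official evaluation protocol rather than plain exact match. Which benchmark each table reports is not stated on this page, so treating these as VQA-v2-style scores is an assumption. A minimal sketch of the simplified form of that metric:

# Simplified VQA consensus accuracy: a predicted answer gets full credit
# if at least 3 of the 10 human annotators gave it, partial credit otherwise.
# (The official evaluator averages this over all 10-choose-9 annotator subsets.)
def vqa_accuracy(prediction: str, human_answers: list[str]) -> float:
    matches = sum(ans == prediction for ans in human_answers)
    return min(matches / 3.0, 1.0)

# Example: 2 of 10 annotators answered "blue" -> credit of 2/3.
print(round(vqa_accuracy("blue", ["blue"] * 2 + ["green"] * 8), 3))  # 0.667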

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | - | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | - | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | - | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | - | Unverified
5 | Kakao Brain | Accuracy | 73.33 | - | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | - | Unverified
7 | 270 | Accuracy | 70.23 | - | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | - | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | - | Unverified
10 | VinVL+L | Accuracy | 64.85 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | - | Unverified
2 | BEiT-3 | Accuracy | 84.19 | - | Unverified
3 | VLMo | Accuracy | 82.78 | - | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | - | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | - | Unverified
6 | CuMo-7B | Accuracy | 82.2 | - | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | - | Unverified
8 | MMU | Accuracy | 81.26 | - | Unverified
9 | Lyrics | Accuracy | 81.2 | - | Unverified
10 | InternVL-C | Accuracy | 81.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | - | Unverified
2 | mPLUG-Huge | overall | 83.62 | - | Unverified
3 | ONE-PEACE | overall | 82.52 | - | Unverified
4 | X2-VLM (large) | overall | 81.8 | - | Unverified
5 | VLMo | overall | 81.3 | - | Unverified
6 | SimVLM | overall | 80.34 | - | Unverified
7 | X2-VLM (base) | overall | 80.2 | - | Unverified
8 | VAST | overall | 80.19 | - | Unverified
9 | VALOR | overall | 78.62 | - | Unverified
10 | Prompt Tuning | overall | 78.53 | - | Unverified