SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system answers natural-language questions about an image. The goal is to teach machines to understand the content of an image well enough to answer arbitrary questions about it in natural language.
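
To make the input/output contract concrete, here is a minimal sketch using the Hugging Face `transformers` visual-question-answering pipeline. The `dandelin/vilt-b32-finetuned-vqa` checkpoint is one publicly available VQA model; the image path and question below are placeholders.

```python
# Minimal VQA example with the Hugging Face `transformers` pipeline.
# Assumes `transformers`, `torch`, and `Pillow` are installed; the
# image path and question are placeholders for your own inputs.
from transformers import pipeline

vqa = pipeline(
    "visual-question-answering",
    model="dandelin/vilt-b32-finetuned-vqa",
)

# The pipeline accepts a file path, URL, or PIL.Image plus a question,
# and returns a ranked list of answer candidates with scores.
results = vqa(image="example.jpg", question="What color is the car?")
print(results[0]["answer"], results[0]["score"])
```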

[Figure omitted; image source: visualqa.org]

Papers

Showing 351–400 of 2167 papers (page 8 of 44)

| Title | Status | Hype |
| --- | --- | --- |
| HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles | Code | 1 |
| HIDRO-VQA: High Dynamic Range Oracle for Video Quality Assessment | Code | 1 |
| Greedy Gradient Ensemble for Robust Visual Question Answering | Code | 1 |
| Graph Optimal Transport for Cross-Domain Alignment | Code | 1 |
| GRIT: General Robust Image Task Benchmark | Code | 1 |
| GraghVQA: Language-Guided Graph Neural Networks for Graph-based Visual Question Answering | Code | 1 |
| Cross-modal Retrieval for Knowledge-based Visual Question Answering | Code | 1 |
| Combo of Thinking and Observing for Outside-Knowledge VQA | Code | 1 |
| Graphhopper: Multi-Hop Scene Graph Reasoning for Visual Question Answering | Code | 1 |
| Hierarchical Conditional Relation Networks for Video Question Answering | Code | 1 |
| Going Full-TILT Boogie on Document Understanding with Text-Image-Layout Transformer | Code | 1 |
| GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering | Code | 1 |
| MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale | Code | 1 |
| Many Heads but One Brain: Fusion Brain -- a Competition and a Single Multimodal Multitask Architecture | Code | 1 |
| MAP: Multimodal Uncertainty-Aware Vision-Language Pre-training Model | Code | 1 |
| MapQA: A Dataset for Question Answering on Choropleth Maps | Code | 1 |
| Genixer: Empowering Multimodal Large Language Models as a Powerful Data Generator | Code | 1 |
| Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers | Code | 1 |
| MedAgentBoard: Benchmarking Multi-Agent Collaboration with Conventional Methods for Diverse Medical Tasks | Code | 1 |
| MedBLIP: Bootstrapping Language-Image Pre-training from 3D Medical Images and Texts | Code | 1 |
| GeoLLaVA-8K: Scaling Remote-Sensing Multimodal Large Language Models to 8K Resolution | Code | 1 |
| MediConfusion: Can you trust your AI radiologist? Probing the reliability of multimodal medical foundation models | Code | 1 |
| MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models | Code | 1 |
| Meta-Learning via Classifier(-free) Diffusion Guidance | Code | 1 |
| ConceptBert: Concept-Aware Representation for Visual Question Answering | Code | 1 |
| Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts | Code | 1 |
| Cross-Modality Relevance for Reasoning on Language and Vision | Code | 1 |
| OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge | Code | 1 |
| Consistency-preserving Visual Question Answering in Medical Imaging | Code | 1 |
| Content-Rich AIGC Video Quality Assessment via Intricate Text Alignment and Motion-Aware Consistency | Code | 1 |
| Deep Multimodal Neural Architecture Search | Code | 1 |
| ConTEXTual Net: A Multimodal Vision-Language Model for Segmentation of Pneumothorax | Code | 1 |
| Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization | Code | 1 |
| Hierarchical multimodal transformers for Multi-Page DocVQA | Code | 1 |
| Contrast and Classify: Training Robust VQA Models | Code | 1 |
| 2BiVQA: Double Bi-LSTM based Video Quality Assessment of UGC Videos | Code | 1 |
| Interpreting Chest X-rays Like a Radiologist: A Benchmark with Clinical Reasoning | Code | 1 |
| FunQA: Towards Surprising Video Comprehension | Code | 1 |
| Detecting and Preventing Hallucinations in Large Vision Language Models | Code | 1 |
| MMUnlearner: Reformulating Multimodal Machine Unlearning in the Era of Multimodal Large Language Models | Code | 1 |
| Gemini Goes to Med School: Exploring the Capabilities of Multimodal Large Language Models on Medical Challenge Problems & Hallucinations | Code | 1 |
| Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions? | Code | 1 |
| Counterfactual Samples Synthesizing and Training for Robust Visual Question Answering | Code | 1 |
| Counterfactual Samples Synthesizing for Robust Visual Question Answering | Code | 1 |
| A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge | Code | 1 |
| Counterfactual VQA: A Cause-Effect Look at Language Bias | Code | 1 |
| From the Least to the Most: Building a Plug-and-Play Visual Reasoner via Data Synthesis | Code | 1 |
| GeneAnnotator: A Semi-automatic Annotation Tool for Visual Scene Graph | Code | 1 |
| FoodieQA: A Multimodal Dataset for Fine-Grained Understanding of Chinese Food Culture | Code | 1 |
| Can I Trust Your Answer? Visually Grounded Video Question Answering | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | human | Accuracy | 89.3 | | Unverified |
| 2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | | Unverified |
| 3 | TRRNet (Ensemble) | Accuracy | 74.03 | | Unverified |
| 4 | MIL-nbgao | Accuracy | 73.81 | | Unverified |
| 5 | Kakao Brain | Accuracy | 73.33 | | Unverified |
| 6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | | Unverified |
| 7 | 270 | Accuracy | 70.23 | | Unverified |
| 8 | NSM ensemble (updated) | Accuracy | 67.55 | | Unverified |
| 9 | VinVL-DPT | Accuracy | 64.92 | | Unverified |
| 10 | VinVL+L | Accuracy | 64.85 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PaLI | Accuracy | 84.3 | | Unverified |
| 2 | BEiT-3 | Accuracy | 84.19 | | Unverified |
| 3 | VLMo | Accuracy | 82.78 | | Unverified |
| 4 | ONE-PEACE | Accuracy | 82.6 | | Unverified |
| 5 | mPLUG (Huge) | Accuracy | 82.43 | | Unverified |
| 6 | CuMo-7B | Accuracy | 82.2 | | Unverified |
| 7 | X2-VLM (large) | Accuracy | 81.9 | | Unverified |
| 8 | MMU | Accuracy | 81.26 | | Unverified |
| 9 | Lyrics | Accuracy | 81.2 | | Unverified |
| 10 | InternVL-C | Accuracy | 81.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BEiT-3 | overall | 84.03 | | Unverified |
| 2 | mPLUG-Huge | overall | 83.62 | | Unverified |
| 3 | ONE-PEACE | overall | 82.52 | | Unverified |
| 4 | X2-VLM (large) | overall | 81.8 | | Unverified |
| 5 | VLMo | overall | 81.3 | | Unverified |
| 6 | SimVLM | overall | 80.34 | | Unverified |
| 7 | X2-VLM (base) | overall | 80.2 | | Unverified |
| 8 | VAST | overall | 80.19 | | Unverified |
| 9 | VALOR | overall | 78.62 | | Unverified |
| 10 | Prompt Tuning | overall | 78.53 | | Unverified |
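
The Claimed values above are percentages. On VQA-v2-style leaderboards, the per-question score underlying both the Accuracy and overall columns is typically the consensus metric of Antol et al. (2015): a predicted answer earns full credit if at least three of the ten human annotators gave it. A minimal sketch, assuming that metric and omitting the official evaluator's answer normalization:

```python
# Sketch of the standard VQA consensus accuracy metric: a predicted
# answer scores min(#matching annotators / 3, 1) per question. The
# official evaluator also lowercases answers, strips articles and
# punctuation, and averages over annotator subsets; omitted here.
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    matches = sum(ans == predicted for ans in human_answers)
    return min(matches / 3.0, 1.0)

def overall_accuracy(predictions, annotations) -> float:
    """Mean per-question accuracy, in percent (as reported above)."""
    scores = [vqa_accuracy(p, a) for p, a in zip(predictions, annotations)]
    return 100.0 * sum(scores) / len(scores)

# Example: 2 of 10 annotators said "red", so the answer earns 2/3 credit.
print(vqa_accuracy("red", ["red", "red"] + ["maroon"] * 8))  # ~0.667
```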