SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer vision task in which a system is given an image and a natural-language question about it and must produce a natural-language answer. The goal is to teach machines to understand the content of an image well enough to answer arbitrary questions about it.

Image Source: visualqa.org
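
To make the task concrete, below is a minimal inference sketch using the Hugging Face transformers visual-question-answering pipeline with the publicly available ViLT checkpoint dandelin/vilt-b32-finetuned-vqa. The image path and question are placeholders, and any VQA-capable checkpoint could be swapped in.

```python
# Minimal VQA inference sketch (assumes: pip install transformers torch pillow).
from transformers import pipeline
from PIL import Image

# ViLT fine-tuned on VQAv2; any VQA-capable checkpoint can be substituted.
vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")

image = Image.open("example.jpg")  # hypothetical local image path
results = vqa(image=image, question="What color is the umbrella?")

# The pipeline returns candidate answers ranked by confidence.
print(results[0]["answer"], results[0]["score"])
```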

Papers

Showing 1551-1600 of 2167 papers

Title | Status | Hype
UNITER: Learning UNiversal Image-TExt Representations | – | 0
ViQuAE, a Dataset for Knowledge-based Visual Question Answering about Named Entities | – | 0
Unleashing the Potential of Large Language Model: Zero-shot VQA for Flood Disaster Scenario | – | 0
Unshuffling Data for Improved Generalization | – | 0
Unshuffling Data for Improved Generalization in Visual Question Answering | – | 0
Unsupervised Keyword Extraction for Full-sentence VQA | – | 0
Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment | – | 0
Unveiling Cross Modality Bias in Visual Question Answering: A Causal View with Possible Worlds VQA | – | 0
UPME: An Unsupervised Peer Review Framework for Multimodal Large Language Model Evaluation | – | 0
Using Visual Cropping to Enhance Fine-Detail Question Answering of BLIP-Family Models | – | 0
V^2Dial: Unification of Video and Visual Dialog via Multimodal Experts | – | 0
VALSE: A Task-Independent Benchmark for Vision and Language Models centered on Linguistic Phenomena | – | 0
Variational Disentangled Attention for Regularized Visual Dialog | – | 0
Variational Visual Question Answering | – | 0
V-Doc : Visual questions answers with Documents | – | 0
V-Doc: Visual Questions Answers With Documents | – | 0
Verb Mirage: Unveiling and Assessing Verb Concept Hallucinations in Multimodal Large Language Models | – | 0
VGNMN: Video-grounded Neural Module Networks for Video-Grounded Dialogue Systems | – | 0
VGNMN: Video-grounded Neural Module Network to Video-Grounded Language Tasks | – | 0
Video Instruction Tuning With Synthetic Data | – | 0
Video Quality Assessment Based on Swin TransformerV2 and Coarse to Fine Strategy | – | 0
Video Quality Assessment for Online Processing: From Spatial to Temporal Sampling | – | 0
Video Question Answering via Attribute-Augmented Attention Network Learning | – | 0
Video Question Answering with Iterative Video-Text Co-Tokenization | – | 0
Video-Text Dataset Construction from Multi-AI Feedback: Promoting Weak-to-Strong Preference Learning for Video Large Language Models | – | 0
VideoCoCa: Video-Text Modeling with Zero-Shot Transfer from Contrastive Captioners | – | 0
ViLMedic: a framework for research at the intersection of vision and language in medical AI | – | 0
Vintern-1B: An Efficient Multimodal Large Language Model for Vietnamese | – | 0
VisCon-100K: Leveraging Contextual Web Data for Fine-tuning Vision Language Models | – | 0
Vision-Amplified Semantic Entropy for Hallucination Detection in Medical Visual Question Answering | – | 0
Vision and Language: from Visual Perception to Content Creation | – | 0
Vision and Language Integration: Moving beyond Objects | – | 0
Vision-Language Models as Success Detectors | – | 0
Vision-Language Pretraining: Current Trends and the Future | – | 0
Vision LLMs Are Bad at Hierarchical Visual Understanding, and LLMs Are the Bottleneck | – | 0
Vision-to-Language Tasks Based on Attributes and Attention Mechanism | – | 0
Visual7W: Grounded Question Answering in Images | – | 0
Visual Commonsense based Heterogeneous Graph Contrastive Learning | – | 0
Visual Entailment: A Novel Task for Fine-Grained Image Understanding | – | 0
Visual Entailment Task for Visually-Grounded Language Learning | – | 0
Visual Explanations from Hadamard Product in Multimodal Deep Networks | – | 0
Visual Fact Checker: Enabling High-Fidelity Detailed Caption Generation | – | 0
Visual Graph Question Answering with ASP and LLMs for Language Parsing | – | 0
Visual Grounding Strategies for Text-Only Natural Language Processing | – | 0
Visual Hallucination: Definition, Quantification, and Prescriptive Remediations | – | 0
Visually Guided Spatial Relation Extraction from Text | – | 0
Visual Mechanisms Inspired Efficient Transformers for Image and Video Quality Assessment | – | 0
Visual Perturbation-aware Collaborative Learning for Overcoming the Language Prior Problem | – | 0
Visual Program Distillation: Distilling Tools and Programmatic Reasoning into Vision-Language Models | – | 0
Page 32 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | – | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | – | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | – | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | – | Unverified
5 | Kakao Brain | Accuracy | 73.33 | – | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | – | Unverified
7 | 270 | Accuracy | 70.23 | – | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | – | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | – | Unverified
10 | VinVL+L | Accuracy | 64.85 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | – | Unverified
2 | BEiT-3 | Accuracy | 84.19 | – | Unverified
3 | VLMo | Accuracy | 82.78 | – | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | – | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | – | Unverified
6 | CuMo-7B | Accuracy | 82.2 | – | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | – | Unverified
8 | MMU | Accuracy | 81.26 | – | Unverified
9 | Lyrics | Accuracy | 81.2 | – | Unverified
10 | InternVL-C | Accuracy | 81.2 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | – | Unverified
2 | mPLUG-Huge | overall | 83.62 | – | Unverified
3 | ONE-PEACE | overall | 82.52 | – | Unverified
4 | X2-VLM (large) | overall | 81.8 | – | Unverified
5 | VLMo | overall | 81.3 | – | Unverified
6 | SimVLM | overall | 80.34 | – | Unverified
7 | X2-VLM (base) | overall | 80.2 | – | Unverified
8 | VAST | overall | 80.19 | – | Unverified
9 | VALOR | overall | 78.62 | – | Unverified
10 | Prompt Tuning | overall | 78.53 | – | Unverified
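
A note on the metrics above: benchmarks differ in how "Accuracy" is scored. Some use plain exact match against a single gold answer, while leaderboards in the VQA v2 family (where "overall" typically denotes the combined score across yes/no, number, and other question types) use a consensus metric: each question carries 10 human answers, and a prediction earns min(1, n/3) credit when it matches n of them. Below is a minimal sketch of that scoring rule using simple lowercased exact matching; the official evaluator additionally normalizes answers (articles, punctuation, number words) and averages over annotator subsets.

```python
# Consensus accuracy used by VQA-style benchmarks: a prediction is credited
# min(1, n/3), where n is how many of the 10 annotators gave that answer.
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    pred = predicted.strip().lower()
    matches = sum(1 for ans in human_answers if ans.strip().lower() == pred)
    return min(1.0, matches / 3.0)

# 4 of 10 annotators agree: full credit.
print(vqa_accuracy("yes", ["yes"] * 4 + ["no"] * 6))  # 1.0
# Only 2 annotators typed "2" (the rest wrote "two"): partial credit, which is
# why the official evaluator normalizes number words before matching.
print(vqa_accuracy("2", ["2"] * 2 + ["two"] * 8))     # 0.666...
```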