SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system answers natural-language questions about an image. The goal is to teach machines to understand the content of an image well enough to answer open-ended questions about it in natural language.

Image Source: visualqa.org
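To make the input/output format concrete, here is a minimal inference sketch using an off-the-shelf BLIP VQA checkpoint from Hugging Face transformers; this is an illustration of the task, not the method of any paper listed below, and the image URL and question are placeholders.

# Minimal VQA inference sketch with an off-the-shelf BLIP checkpoint.
# Assumes transformers, torch, Pillow, and requests are installed;
# the image URL and question below are illustrative placeholders.
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

image = Image.open(
    requests.get("https://example.com/demo.jpg", stream=True).raw
).convert("RGB")
question = "How many dogs are in the picture?"

# The processor packs the image pixels and the tokenized question
# into a single batch; the model then generates a short free-form answer.
inputs = processor(image, question, return_tensors="pt")
answer_ids = model.generate(**inputs)
print(processor.decode(answer_ids[0], skip_special_tokens=True))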

Papers

Showing 851–900 of 2167 papers (page 18 of 44)

Title | Status | Hype
Deep Equilibrium Multimodal Fusion | – | 0
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Code | 2
Answer Mining from a Pool of Images: Towards Retrieval-Based Visual Question Answering | Code | 1
Pre-Training Multi-Modal Dense Retrievers for Outside-Knowledge Visual Question Answering | Code | 0
Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | Code | 2
Kosmos-2: Grounding Multimodal Large Language Models to the World | Code | 1
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Code | 2
FunQA: Towards Surprising Video Comprehension | Code | 1
Visual Question Answering in Remote Sensing with Cross-Attention and Multimodal Information Bottleneck | – | 0
StarVQA+: Co-training Space-Time Attention for Video Quality Assessment | Code | 0
Investigating Prompting Techniques for Zero- and Few-Shot Visual Question Answering | Code | 1
Encyclopedic VQA: Visual questions about detailed properties of fine-grained categories | – | 0
COSA: Concatenated Sample Pretrained Vision-Language Foundation Model | Code | 1
Improving Selective Visual Question Answering by Learning from Your Peers | Code | 1
Scalable Neural-Probabilistic Answer Set Programming | Code | 1
Visual Question Answering (VQA) on Images with Superimposed Text | – | 0
AVIS: Autonomous Visual Information Seeking with Large Language Model Agent | – | 0
Weakly Supervised Visual Question Answer Generation | – | 0
Modular Visual Question Answering via Code Generation | Code | 1
Knowledge Detection by Relevant Question and Image Attributes in Visual Question Answering | – | 0
Rewarded soups: towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards | Code | 1
Q: How to Specialize Large Vision-Language Models to Data-Scarce VQA Tasks? A: Self-Train on Unlabeled Images! | Code | 1
Multi-CLIP: Contrastive Vision-Language Pre-training for Question Answering tasks in 3D Scenes | – | 0
DocFormerv2: Local Features for Document Understanding | Code | 1
MetaVL: Transferring In-Context Learning Ability From Language Models to Vision-Language Models | – | 0
Revisiting the Role of Language Priors in Vision-Language Models | Code | 1
Evaluating the Capabilities of Multi-modal Reasoning Models with Synthetic Task Data | – | 0
Layout and Task Aware Instruction Prompt for Zero-shot Document Image Question Answering | Code | 1
LiT-4-RSVQA: Lightweight Transformer-based Visual Question Answering in Remote Sensing | – | 0
End-to-end Knowledge Retrieval with Multi-modal Queries | Code | 1
Overcoming Language Bias in Remote Sensing Visual Question Answering via Adversarial Training | – | 0
Using Visual Cropping to Enhance Fine-Detail Question Answering of BLIP-Family Models | – | 0
Unveiling Cross Modality Bias in Visual Question Answering: A Causal View with Possible Worlds VQA | – | 0
Generate then Select: Open-ended Visual Question Answering Guided by World Knowledge | – | 0
VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset | Code | 2
PaLI-X: On Scaling up a Multilingual Vision and Language Model | Code | 1
HaVQA: A Dataset for Visual Question Answering and Multimodal Research in Hausa Language | Code | 0
CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers | Code | 1
Modularized Zero-shot VQA with Pre-trained Models | Code | 0
Study of Subjective and Objective Quality Assessment of Mobile Cloud Gaming Videos | – | 0
Zero-shot Visual Question Answering with Language Model Feedback | Code | 0
Dynamic Clue Bottlenecks: Towards Interpretable-by-Design Visual Question Answering | – | 0
NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario | Code | 2
Measuring Faithful and Plausible Visual Grounding in VQA | Code | 0
Transferring Visual Attributes from Natural Language to Verified Image Generation | – | 0
Image Manipulation via Multi-Hop Instructions -- A New Dataset and Weakly-Supervised Neuro-Symbolic Approach | – | 0
DUBLIN -- Document Understanding By Language-Image Network | – | 0
Getting ViT in Shape: Scaling Laws for Compute-Optimal Model Design | – | 0
VLAB: Enhancing Video Language Pre-training by Feature Adapting and Blending | – | 0
Towards Explainable In-the-Wild Video Quality Assessment: A Database and a Language-Prompted Approach | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | – | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | – | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | – | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | – | Unverified
5 | Kakao Brain | Accuracy | 73.33 | – | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | – | Unverified
7 | 270 | Accuracy | 70.23 | – | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | – | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | – | Unverified
10 | VinVL+L | Accuracy | 64.85 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | – | Unverified
2 | BEiT-3 | Accuracy | 84.19 | – | Unverified
3 | VLMo | Accuracy | 82.78 | – | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | – | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | – | Unverified
6 | CuMo-7B | Accuracy | 82.2 | – | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | – | Unverified
8 | MMU | Accuracy | 81.26 | – | Unverified
9 | Lyrics | Accuracy | 81.2 | – | Unverified
10 | InternVL-C | Accuracy | 81.2 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | – | Unverified
2 | mPLUG-Huge | overall | 83.62 | – | Unverified
3 | ONE-PEACE | overall | 82.52 | – | Unverified
4 | X2-VLM (large) | overall | 81.8 | – | Unverified
5 | VLMo | overall | 81.3 | – | Unverified
6 | SimVLM | overall | 80.34 | – | Unverified
7 | X2-VLM (base) | overall | 80.2 | – | Unverified
8 | VAST | overall | 80.19 | – | Unverified
9 | VALOR | overall | 78.62 | – | Unverified
10 | Prompt Tuning | overall | 78.53 | – | Unverified