SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1251-1300 of 2177 papers

Title | Status | Hype
Towards Unifying Medical Vision-and-Language Pre-training via Soft Prompts | Code | 1
Multimodal Federated Learning via Contrastive Representation Ensemble | Code | 1
Differentiable Outlier Detection Enable Robust Deep Multimodal Analysis | Code | 0
Is Multimodal Vision Supervision Beneficial to Language? | Code | 0
Language Quantized AutoEncoders: Towards Unsupervised Text-Image Alignment | Code | 1
mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video | Code | 4
Multimodality Representation Learning: A Survey on Evolution, Pretraining and Its Applications | Code | 1
BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models | Code | 4
BinaryVQA: A Versatile Test Set to Evaluate the Out-of-Distribution Generalization of VQA Models | Code | 0
Towards a Unified Model for Generating Answers and Explanations in Visual Question Answering | | 0
HRVQA: A Visual Question Answering Benchmark for High-Resolution Aerial Images | | 0
Champion Solution for the WSDM2023 Toloka VQA Challenge | Code | 3
Towards Models that Can See and Read | | 0
Curriculum Script Distillation for Multilingual Visual Question Answering | | 0
SlideVQA: A Dataset for Document Visual Question Answering on Multiple Images | Code | 1
Multimodal Inverse Cloze Task for Knowledge-based Visual Question Answering | Code | 1
Adaptively Clustering Neighbor Elements for Image-Text Generation | Code | 0
PromptCap: Prompt-Guided Image Captioning for VQA with GPT-3 | | 0
Variational Causal Inference Network for Explanatory Visual Question Answering | Code | 1
Decouple Before Interact: Multi-Modal Prompt Learning for Continual Visual Question Answering | | 0
Toward Multi-Granularity Decision-Making: Explicit Visual Reasoning with Hierarchical Knowledge | Code | 0
Image as a Foreign Language: BEiT Pretraining for Vision and Vision-Language Tasks | | 0
Exploring the Effect of Primitives for Compositional Generalization in Vision-and-Language | Code | 0
RMLVQA: A Margin Loss Approach for Visual Question Answering With Language Biases | | 0
VQACL: A Novel Visual Question Answering Continual Learning Setting | Code | 1
From Images to Textual Prompts: Zero-Shot Visual Question Answering With Frozen Large Language Models | | 0
When are Lemons Purple? The Concept Association Bias of Vision-Language Models | | 0
UnICLAM: Contrastive Representation Learning with Adversarial Masking for Unified and Interpretable Medical Vision Question Answering | | 0
From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models | Code | 0
Towards Unsupervised Visual Reasoning: Do Off-The-Shelf Features Know How to Reason? | | 0
MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering | | 0
SceneGATE: Scene-Graph based co-Attention networks for TExt visual question answering | | 0
CLIPPO: Image-and-Language Understanding from Pixels Only | | 0
REVEAL: Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge Memory | Code | 0
ParsVQA-Caps: A Benchmark for Visual Question Answering and Image Captioning in Persian | | 0
Hierarchical multimodal transformers for Multi-Page DocVQA | Code | 1
Visual Question Answering From Another Perspective: CLEVR Mental Rotation Tests | Code | 0
Compound Tokens: Channel Fusion for Vision-Language Representation Learning | | 0
Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning | Code | 1
Optimizing Explanations by Network Canonization and Hyperparameter Search | | 0
PiggyBack: Pretrained Visual Question Answering Environment for Backing up Non-deep Learning Professionals | | 0
Neuro-Symbolic Spatio-Temporal Reasoning | | 0
Seeing What You Miss: Vision-Language Pre-training with Semantic Completion Learning | Code | 1
Self-supervised vision-language pretraining for Medical visual question answering | Code | 1
Look, Read and Ask: Learning to Ask Questions by Reading Text in Images | | 0
Cross-Modal Contrastive Learning for Robust Reasoning in VQA | Code | 0
CL-CrossVQA: A Continual Learning Benchmark for Cross-Domain Visual Question Answering | | 0
Visual Programming: Compositional visual reasoning without training | Code | 2
Text-Aware Dual Routing Network for Visual Question Answering | | 0
I Can't Believe There's No Images! Learning Visual Tasks Using only Language Supervision | Code | 1
Page 26 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified