SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1451–1500 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| Differentiable Outlier Detection Enable Robust Deep Multimodal Analysis | Code | 0 |
| Is Multimodal Vision Supervision Beneficial to Language? | Code | 0 |
| BinaryVQA: A Versatile Test Set to Evaluate the Out-of-Distribution Generalization of VQA Models | Code | 0 |
| Towards a Unified Model for Generating Answers and Explanations in Visual Question Answering | | 0 |
| HRVQA: A Visual Question Answering Benchmark for High-Resolution Aerial Images | | 0 |
| Towards Models that Can See and Read | | 0 |
| Curriculum Script Distillation for Multilingual Visual Question Answering | | 0 |
| Adaptively Clustering Neighbor Elements for Image-Text Generation | Code | 0 |
| From Images to Textual Prompts: Zero-Shot Visual Question Answering With Frozen Large Language Models | | 0 |
| Exploring the Effect of Primitives for Compositional Generalization in Vision-and-Language | Code | 0 |
| Decouple Before Interact: Multi-Modal Prompt Learning for Continual Visual Question Answering | | 0 |
| Image as a Foreign Language: BEiT Pretraining for Vision and Vision-Language Tasks | | 0 |
| RMLVQA: A Margin Loss Approach for Visual Question Answering With Language Biases | | 0 |
| Toward Multi-Granularity Decision-Making: Explicit Visual Reasoning with Hierarchical Knowledge | Code | 0 |
| PromptCap: Prompt-Guided Image Captioning for VQA with GPT-3 | | 0 |
| When are Lemons Purple? The Concept Association Bias of Vision-Language Models | | 0 |
| UnICLAM: Contrastive Representation Learning with Adversarial Masking for Unified and Interpretable Medical Vision Question Answering | | 0 |
| From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models | Code | 0 |
| Towards Unsupervised Visual Reasoning: Do Off-The-Shelf Features Know How to Reason? | | 0 |
| MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering | | 0 |
| SceneGATE: Scene-Graph based co-Attention networks for TExt visual question answering | | 0 |
| CLIPPO: Image-and-Language Understanding from Pixels Only | | 0 |
| REVEAL: Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge Memory | Code | 0 |
| ParsVQA-Caps: A Benchmark for Visual Question Answering and Image Captioning in Persian | | 0 |
| Visual Question Answering From Another Perspective: CLEVR Mental Rotation Tests | Code | 0 |
| Compound Tokens: Channel Fusion for Vision-Language Representation Learning | | 0 |
| Optimizing Explanations by Network Canonization and Hyperparameter Search | | 0 |
| PiggyBack: Pretrained Visual Question Answering Environment for Backing up Non-deep Learning Professionals | | 0 |
| Neuro-Symbolic Spatio-Temporal Reasoning | | 0 |
| Look, Read and Ask: Learning to Ask Questions by Reading Text in Images | | 0 |
| Cross-Modal Contrastive Learning for Robust Reasoning in VQA | Code | 0 |
| CL-CrossVQA: A Continual Learning Benchmark for Cross-Domain Visual Question Answering | | 0 |
| Text-Aware Dual Routing Network for Visual Question Answering | | 0 |
| AlignVE: Visual Entailment Recognition Based on Alignment Relations | | 0 |
| Visually Grounded VQA by Lattice-based Retrieval | Code | 0 |
| MF2-MVQA: A Multi-stage Feature Fusion method for Medical Visual Question Answering | | 0 |
| Towards Reasoning-Aware Explainable VQA | | 0 |
| ERNIE-UniX2: A Unified Cross-lingual Cross-modal Framework for Understanding and Generation | | 0 |
| Generalization Differences between End-to-End and Neuro-Symbolic Vision-Language Reasoning Systems | | 0 |
| Compressing And Debiasing Vision-Language Pre-Trained Models for Visual Question Answering | Code | 0 |
| What's Different between Visual Question Answering for Machine "Understanding" Versus for Accessibility? | Code | 0 |
| Learning by Hallucinating: Vision-Language Pre-training with Weak Supervision | | 0 |
| RSVG: Exploring Data and Models for Visual Grounding on Remote Sensing Data | | 0 |
| Image Semantic Relation Generation | | 0 |
| CPL: Counterfactual Prompt Learning for Vision and Language Models | | 0 |
| Aligning MAGMA by Few-Shot Learning and Finetuning | | 0 |
| Entity-Focused Dense Passage Retrieval for Outside-Knowledge Visual Question Answering | | 0 |
| Plug-and-Play VQA: Zero-shot VQA by Conjoining Large Pretrained Models with Zero Training | Code | 0 |
| Multi-Modal Fusion Transformer for Visual Question Answering in Remote Sensing | | 0 |
| MAMO: Masked Multimodal Modeling for Fine-Grained Vision-Language Representation Learning | | 0 |
Page 30 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |