SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 851–900 of 2177 papers

Title | Status | Hype
Game of Sketches: Deep Recurrent Models of Pictionary-style Word Guessing | Code | 0
MUREL: Multimodal Relational Reasoning for Visual Question Answering | Code | 0
ShareGPT4V: Improving Large Multi-Modal Models with Better Captions | Code | 0
MUTAN: Multimodal Tucker Fusion for Visual Question Answering | Code | 0
Multi-Sourced Compositional Generalization in Visual Question Answering | Code | 0
Siamese Tracking with Lingual Object Constraints | Code | 0
Multiple interaction learning with question-type prior knowledge for constraining answer search space in visual question answering | Code | 0
Multi-Page Document Visual Question Answering using Self-Attention Scoring Mechanism | Code | 0
No Images, No Problem: Retaining Knowledge in Continual VQA with Questions-Only Memory | Code | 0
Fully Authentic Visual Question Answering Dataset from Online Communities | Code | 0
Adapting Visual Question Answering Models for Enhancing Multimodal Community Q&A Platforms | Code | 0
Multimodal Residual Learning for Visual QA | Code | 0
Multimodal Preference Data Synthetic Alignment with Reward Model | Code | 0
Cognitive Visual Commonsense Reasoning Using Dynamic Working Memory | Code | 0
Multimodal Large Language Models and Tunings: Vision, Language, Sensors, Audio, and Beyond | Code | 0
From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models | Code | 0
Multimodal Explanations: Justifying Decisions and Pointing to the Evidence | Code | 0
Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding | Code | 0
Multi-modal Factorized Bilinear Pooling with Co-Attention Learning for Visual Question Answering | Code | 0
Noise Estimation Using Density Estimation for Self-Supervised Multimodal Learning | Code | 0
Language Models Meet Anomaly Detection for Better Interpretability and Generalizability | Code | 0
FRAMES-VQA: Benchmarking Fine-Tuning Robustness across Multi-Modal Shifts in Visual Question Answering | Code | 0
SparrowVQE: Visual Question Explanation for Course Content Understanding | Code | 0
Multi-Image Visual Question Answering | Code | 0
Co-attending Regions and Detections with Multi-modal Multiplicative Embedding for VQA | Code | 0
Co-attending Free-form Regions and Detections with Multi-modal Multiplicative Feature Embedding for Visual Question Answering | Code | 0
Deep Modular Co-Attention Networks for Visual Question Answering | Code | 0
Focal Visual-Text Attention for Visual Question Answering | Code | 0
MQA: Answering the Question via Robotic Manipulation | Code | 0
Focal Visual-Text Attention for Memex Question Answering | Code | 0
CluMo: Cluster-based Modality Fusion Prompt for Continual Learning in Visual Question Answering | Code | 0
FM2DS: Few-Shot Multimodal Multihop Data Synthesis with Knowledge Distillation for Question Answering | Code | 0
Alignment Attention by Matching Key and Query Distributions | Code | 0
Modularized Zero-shot VQA with Pre-trained Models | Code | 0
Modulating early visual processing by language | Code | 0
Structured Triplet Learning with POS-tag Guided Attention for Visual Question Answering | Code | 0
MM-PoE: Multiple Choice Reasoning via. Process of Elimination using Multi-Modal Models | Code | 0
Aligning Visual Regions and Textual Concepts for Semantic-Grounded Image Representations | Code | 0
MM-Prompt: Cross-Modal Prompt Tuning for Continual Visual Question Answering | Code | 0
ClinKD: Cross-Modal Clinical Knowledge Distiller For Multi-Task Medical Images | Code | 0
Probabilistic Embeddings for Frozen Vision-Language Models: Uncertainty Quantification with Gaussian Process Latent Variable Models | Code | 0
Mixture-of-Subspaces in Low-Rank Adaptation | Code | 0
Filling the Image Information Gap for VQA: Prompting Large Language Models to Proactively Ask Questions | Code | 0
CLEVR-Ref+: Diagnosing Visual Reasoning with Referring Expressions | Code | 0
MIRTT: Learning Multimodal Interaction Representations from Trilinear Transformers for Visual Question Answering | Code | 0
Few-Shot Multimodal Explanation for Visual Question Answering | Code | 0
CLEVR_HYP: A Challenge Dataset and Baselines for Visual Question Answering with Hypothetical Actions over Images | Code | 0
Active Learning for Visual Question Answering: An Empirical Study | Code | 0
FEDMEKI: A Benchmark for Scaling Medical Foundation Models via Federated Knowledge Injection | Code | 0
Federated Document Visual Question Answering: A Pilot Study | Code | 0
Page 18 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified