SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1776–1800 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| Neural Module Networks | Code | 0 |
| Unleashing the Potentials of Likelihood Composition for Multi-modal Language Models | Code | 0 |
| Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding | Code | 0 |
| Answering Questions about Data Visualizations using Efficient Bimodal Fusion | Code | 0 |
| Structured Attentions for Visual Question Answering | Code | 0 |
| Structured Triplet Learning with POS-tag Guided Attention for Visual Question Answering | Code | 0 |
| What Can Neural Networks Reason About? | Code | 0 |
| Counting Everyday Objects in Everyday Scenes | Code | 0 |
| AdCare-VLM: Leveraging Large Vision Language Model (LVLM) to Monitor Long-Term Medication Adherence and Care | Code | 0 |
| Visual Reasoning with Multi-hop Feature Modulation | Code | 0 |
| Filling the Image Information Gap for VQA: Prompting Large Language Models to Proactively Ask Questions | Code | 0 |
| No Images, No Problem: Retaining Knowledge in Continual VQA with Questions-Only Memory | Code | 0 |
| Noise Estimation Using Density Estimation for Self-Supervised Multimodal Learning | Code | 0 |
| Unveiling Uncertainty: A Deep Dive into Calibration and Performance of Multimodal Large Language Models | Code | 0 |
| SURE-VQA: Systematic Understanding of Robustness Evaluation in Medical VQA Tasks | Code | 0 |
| Few-Shot Multimodal Explanation for Visual Question Answering | Code | 0 |
| Music's Multimodal Complexity in AVQA: Why We Need More than General Multimodal LLMs | Code | 0 |
| Zero-shot Commonsense Reasoning over Machine Imagination | Code | 0 |
| MUREL: Multimodal Relational Reasoning for Visual Question Answering | Code | 0 |
| Multi-Sourced Compositional Generalization in Visual Question Answering | Code | 0 |
| Multiple interaction learning with question-type prior knowledge for constraining answer search space in visual question answering | Code | 0 |
| Object Attribute Matters in Visual Question Answering | Code | 0 |
| Object-aware Adaptive-Positivity Learning for Audio-Visual Question Answering | Code | 0 |
| What is Right for Me is Not Yet Right for You: A Dataset for Grounding Relative Directions via Multi-Task Learning | Code | 0 |
| Visual Robustness Benchmark for Visual Question Answering (VQA) | Code | 0 |
Page 72 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |