SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1751–1800 of 2177 papers

Title | Status | Hype
On the Promises and Challenges of Multimodal Foundation Models for Geographical, Environmental, Agricultural, and Urban Planning Applications |  | 0
On the Significance of Question Encoder Sequence Model in the Out-of-Distribution Performance in Visual Question Answering |  | 0
On the Value of Out-of-Distribution Testing: An Example of Goodhart's Law |  | 0
Open-Ended Visual Question Answering by Multi-Modal Domain Adaptation |  | 0
Optimizing Explanations by Network Canonization and Hyperparameter Search |  | 0
Optimizing Visual Question Answering Models for Driving: Bridging the Gap Between Human and Machine Attention Patterns |  | 0
Optimus: Accelerating Large-Scale Multi-Modal LLM Training by Bubble Exploitation |  | 0
Order Matters: Exploring Order Sensitivity in Multimodal Large Language Models |  | 0
ORD: Object Relationship Discovery for Visual Dialogue Generation |  | 0
ORION: A Holistic End-to-End Autonomous Driving Framework by Vision-Language Instructed Action Generation |  | 0
Out of the Box: Reasoning with Graph Convolution Nets for Factual Visual Question Answering |  | 0
Overcoming Language Bias in Remote Sensing Visual Question Answering via Adversarial Training |  | 0
Overcoming Language Priors for Visual Question Answering Based on Knowledge Distillation |  | 0
Overcoming Language Priors in Visual Question Answering with Adversarial Regularization |  | 0
Overview of TREC 2024 Medical Video Question Answering (MedVidQA) Track |  | 0
NAAQA: A Neural Architecture for Acoustic Question Answering | Code | 0
Weakly Supervised Relative Spatial Reasoning for Visual Question Answering | Code | 0
CROPE: Evaluating In-Context Adaptation of Vision and Language Models to Culture-Specific Concepts | Code | 0
MUTAN: Multimodal Tucker Fusion for Visual Question Answering | Code | 0
Focal Visual-Text Attention for Memex Question Answering | Code | 0
FM2DS: Few-Shot Multimodal Multihop Data Synthesis with Knowledge Distillation for Question Answering | Code | 0
Answer Questions with Right Image Regions: A Visual Attention Regularization Approach | Code | 0
NeSyCoCo: A Neuro-Symbolic Concept Composer for Compositional Generalization | Code | 0
Unlearning Sensitive Information in Multimodal LLMs: Benchmark and Attack-Defense Evaluation | Code | 0
X-GGM: Graph Generative Modeling for Out-of-Distribution Generalization in Visual Question Answering | Code | 0
Neural Module Networks | Code | 0
Unleashing the Potentials of Likelihood Composition for Multi-modal Language Models | Code | 0
Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding | Code | 0
Answering Questions about Data Visualizations using Efficient Bimodal Fusion | Code | 0
Structured Attentions for Visual Question Answering | Code | 0
Structured Triplet Learning with POS-tag Guided Attention for Visual Question Answering | Code | 0
What Can Neural Networks Reason About? | Code | 0
Counting Everyday Objects in Everyday Scenes | Code | 0
AdCare-VLM: Leveraging Large Vision Language Model (LVLM) to Monitor Long-Term Medication Adherence and Care | Code | 0
Visual Reasoning with Multi-hop Feature Modulation | Code | 0
Filling the Image Information Gap for VQA: Prompting Large Language Models to Proactively Ask Questions | Code | 0
No Images, No Problem: Retaining Knowledge in Continual VQA with Questions-Only Memory | Code | 0
Noise Estimation Using Density Estimation for Self-Supervised Multimodal Learning | Code | 0
Unveiling Uncertainty: A Deep Dive into Calibration and Performance of Multimodal Large Language Models | Code | 0
SURE-VQA: Systematic Understanding of Robustness Evaluation in Medical VQA Tasks | Code | 0
Few-Shot Multimodal Explanation for Visual Question Answering | Code | 0
Music's Multimodal Complexity in AVQA: Why We Need More than General Multimodal LLMs | Code | 0
Zero-shot Commonsense Reasoning over Machine Imagination | Code | 0
MUREL: Multimodal Relational Reasoning for Visual Question Answering | Code | 0
Multi-Sourced Compositional Generalization in Visual Question Answering | Code | 0
Multiple interaction learning with question-type prior knowledge for constraining answer search space in visual question answering | Code | 0
Object Attribute Matters in Visual Question Answering | Code | 0
Object-aware Adaptive-Positivity Learning for Audio-Visual Question Answering | Code | 0
What is Right for Me is Not Yet Right for You: A Dataset for Grounding Relative Directions via Multi-Task Learning | Code | 0
Visual Robustness Benchmark for Visual Question Answering (VQA) | Code | 0
Page 36 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 |  | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 |  | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 |  | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 |  | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 |  | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 |  | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 |  | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 |  | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 |  | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 |  | Unverified
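
The "GPT-4 score" metric reported above refers to GPT-assisted evaluation, in which GPT-4 acts as a judge and grades each model answer against the reference answer. The exact judge prompt and scale behind these leaderboard numbers are not shown here, so the following is only a minimal sketch of how such a metric is commonly computed, assuming the official `openai` Python client and an illustrative prompt and 0-5 grading scale:

```python
# Hypothetical sketch of a GPT-assisted "GPT-4 score" evaluation loop.
# Assumes the `openai` Python client (>= 1.0) and OPENAI_API_KEY in the
# environment; the judge prompt and 0-5 scale are illustrative, not the
# exact protocol used for the numbers in the table above.
import json
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = (
    "You are evaluating a visual question answering model.\n"
    "Question: {question}\n"
    "Ground-truth answer: {reference}\n"
    "Model answer: {prediction}\n"
    'Reply with JSON only: {{"score": <integer 0-5 for correctness>}}'
)

def gpt4_score(samples):
    """Average judge score over (question, reference, prediction) triples,
    rescaled to 0-100 to match how the table above presents results."""
    total = 0
    for question, reference, prediction in samples:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{
                "role": "user",
                "content": JUDGE_PROMPT.format(
                    question=question,
                    reference=reference,
                    prediction=prediction,
                ),
            }],
        )
        # Parse the judge's JSON verdict; a robust harness would also
        # handle malformed replies and retry.
        total += json.loads(response.choices[0].message.content)["score"]
    return 100 * total / (5 * len(samples))
```

Because the judge model, prompt wording, and grading scale all shift the resulting number, claimed scores computed under different judge setups are not directly comparable, which is one reason entries remain "Unverified" until re-run under a common protocol.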