SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1201–1250 of 2177 papers

Title | Status | Hype
EchoSight: Advancing Visual-Language Models with Wiki Knowledge | | 0
MoCA: Incorporating Multi-stage Domain Pretraining and Cross-guided Multimodal Attention for Textbook Question Answering | | 0
MoColl: Agent-Based Specific and General Model Collaboration for Image Captioning | | 0
Modality-Aware Integration with Large Language Models for Knowledge-based Visual Question Answering | | 0
Modeling Collaborator: Enabling Subjective Vision Classification With Minimal Human Effort via LLM Tool-Use | | 0
Modeling Coreference Relations in Visual Dialog | | 0
Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models | | 0
Modern Question Answering Datasets and Benchmarks: A Survey | | 0
Modular Graph Attention Network for Complex Visual Relational Reasoning | | 0
Which Client is Reliable?: A Reliable and Personalized Prompt-based Federated Learning for Medical Image Question Answering | | 0
EBMs vs. CL: Exploring Self-Supervised Visual Pretraining for Visual Question Answering | | 0
Modulated Self-attention Convolutional Network for VQA | | 0
Vision-Amplified Semantic Entropy for Hallucination Detection in Medical Visual Question Answering | | 0
eaVQA: An Experimental Analysis on Visual Question Answering Models | | 0
MoEMoE: Question Guided Dense and Scalable Sparse Mixture-of-Expert for Multi-source Multi-modal Answering | | 0
MOKA: Open-World Robotic Manipulation through Mark-Based Visual Prompting | | 0
EACO: Enhancing Alignment in Multimodal LLMs via Critical Observation | | 0
Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training | | 0
MOSMOS: Multi-organ segmentation facilitated by medical report supervision | | 0
E3D-GPT: Enhanced 3D Visual Foundation for Medical Vision-Language Model | | 0
MPDrive: Improving Spatial Understanding with Marker-Based Prompt Learning for Autonomous Driving | | 0
DynRsl-VLM: Enhancing Autonomous Driving Perception with Dynamic Resolution Vision-Language Models | | 0
Dynamic Knowledge Integration for Enhanced Vision-Language Reasoning | | 0
Dynamic Fusion With Intra- and Inter-Modality Attention Flow for Visual Question Answering | | 0
Dynamic Fusion with Intra- and Inter- Modality Attention Flow for Visual Question Answering | | 0
Vision and Language: from Visual Perception to Content Creation | | 0
mRAG: Elucidating the Design Space of Multi-modal Retrieval-Augmented Generation | | 0
MR-MLLM: Mutual Reinforcement of Multimodal Comprehension and Vision Perception | | 0
MTabVQA: Evaluating Multi-Tabular Reasoning of Language Models in Visual Space | | 0
DUBLIN -- Document Understanding By Language-Image Network | | 0
Mucko: Multi-Layer Cross-Modal Knowledge Reasoning for Fact-based Visual Question Answering | | 0
Muffin or Chihuahua? Challenging Multimodal Large Language Models with Multipanel VQA | | 0
DualNet: Domain-Invariant Network for Visual Question Answering | | 0
Multi-Agents Based on Large Language Models for Knowledge-based Visual Question Answering | | 0
Dual Capsule Attention Mask Network with Mutual Learning for Visual Question Answering | | 0
DreamSync: Aligning Text-to-Image Generation with Image Understanding Feedback | | 0
Multi-CLIP: Contrastive Vision-Language Pre-training for Question Answering tasks in 3D Scenes | | 0
Multi-Clue Reasoning with Memory Augmentation for Knowledge-based Visual Question Answering | | 0
Double Visual Defense: Adversarial Pre-training and Instruction Tuning for Improving Vision-Language Model Robustness | | 0
Multi-grained Attention with Object-level Grounding for Visual Question Answering | | 0
A Multimodal Social Agent | | 0
Vision-Flan: Scaling Human-Labeled Tasks in Visual Instruction Tuning | | 0
Multi-Layer Content Interaction Through Quaternion Product For Visual Question Answering | | 0
Multi-Level Attention Networks for Visual Question Answering | | 0
Multilingual Augmentation for Robust Visual Question Answering in Remote Sensing Images | | 0
Multimodal Adaptive Distillation for Leveraging Unimodal Encoders for Vision-Language Tasks | | 0
Domain-robust VQA with diverse datasets and methods but no target labels | | 0
Domain Adaptation of VLM for Soccer Video Understanding | | 0
Do Explanations make VQA Models more Predictable to a Human? | | 0
Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? | | 0
Page 25 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified
8 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified
9 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified
10 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified