SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 551–600 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| Show-o: One Single Transformer to Unify Multimodal Understanding and Generation | Code | 5 |
| Swarm Intelligence in Geo-Localization: A Multi-Agent Large Vision-Language Model Collaborative Framework | | 0 |
| CluMo: Cluster-based Modality Fusion Prompt for Continual Learning in Visual Question Answering | Code | 0 |
| SEA: Supervised Embedding Alignment for Token-Level Visual-Textual Integration in MLLMs | | 0 |
| V-RoAst: Visual Road Assessment. Can VLM be a Road Safety Assessor Using the iRAP Standard? | Code | 1 |
| TeamLoRA: Boosting Low-Rank Adaptation with Expert Collaboration and Competition | Code | 0 |
| PA-LLaVA: A Large Language-Vision Assistant for Human Pathology Image Understanding | Code | 2 |
| FEDMEKI: A Benchmark for Scaling Medical Foundation Models via Federated Knowledge Injection | Code | 0 |
| Beyond the Hype: A dispassionate look at vision-language models in medical scenario | | 0 |
| Med-PMC: Medical Personalized Multi-modal Consultation with a Proactive Ask-First-Observe-Next Paradigm | Code | 0 |
| A Survey on Benchmarks of Multimodal Large Language Models | Code | 2 |
| Visual Agents as Fast and Slow Thinkers | Code | 1 |
| IIU: Independent Inference Units for Knowledge-based Visual Question Answering | Code | 0 |
| Enhancing Visual Question Answering through Ranking-Based Hybrid Training and Multimodal Fusion | | 0 |
| CROME: Cross-Modal Adapters for Efficient Multimodal LLM | | 0 |
| SWIFT: A Scalable lightWeight Infrastructure for Fine-Tuning | Code | 11 |
| mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal Large Language Models | Code | 7 |
| Surgical-VQLA++: Adversarial Contrastive Learning for Calibrated Robust Visual Question-Localized Answering in Robotic Surgery | Code | 1 |
| Revisiting Multi-Modal LLM Evaluation | | 0 |
| Img-Diff: Contrastive Data Synthesis for Multimodal Large Language Models | | 0 |
| Optimus: Accelerating Large-Scale Multi-Modal LLM Training by Bubble Exploitation | | 0 |
| GMAI-MMBench: A Comprehensive Multimodal Evaluation Benchmark Towards General Medical AI | Code | 2 |
| Targeted Visual Prompting for Medical Visual Question Answering | Code | 0 |
| LLaVA-OneVision: Easy Visual Task Transfer | Code | 0 |
| Lumina-mGPT: Illuminate Flexible Photorealistic Text-to-Image Generation with Multimodal Generative Pretraining | Code | 7 |
| MMPKUBase: A Comprehensive and High-quality Chinese Multi-modal Knowledge Graph | | 0 |
| Towards Flexible Evaluation for Generative Visual Question Answering | Code | 0 |
| MM-Vet v2: A Challenging Benchmark to Evaluate Large Multimodal Models for Integrated Capabilities | Code | 3 |
| SimpleLLM4AD: An End-to-End Vision-Language Model with Graph Visual Question Answering for Autonomous Driving | | 0 |
| Prompting Medical Large Vision-Language Models to Diagnose Pathologies by Visual Question Answering | | 0 |
| Boosting Audio Visual Question Answering via Key Semantic-Aware Cues | Code | 1 |
| Pyramid Coder: Hierarchical Code Generator for Compositional Visual Question Answering | | 0 |
| Take A Step Back: Rethinking the Two Stages in Visual Reasoning | | 0 |
| VolDoGer: LLM-assisted Datasets for Domain Generalization in Vision-Language Tasks | | 0 |
| AdaCoder: Adaptive Prompt Compression for Programmatic Visual Question Answering | | 0 |
| Towards A Generalizable Pathology Foundation Model via Unified Knowledge Distillation | Code | 2 |
| VILA^2: VILA Augmented VILA | | 0 |
| INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model | Code | 1 |
| Imperfect Vision Encoders: Efficient and Robust Tuning for Vision-Language Models | | 0 |
| Learning Trimodal Relation for AVQA with Missing Modality | Code | 1 |
| Exploring the Effectiveness of Object-Centric Representations in Visual Question Answering: Comparative Insights with Foundation Models | | 0 |
| Knowledge Acquisition Disentanglement for Knowledge-based Visual Question Answering with Large Language Models | Code | 0 |
| HaloQuest: A Visual Hallucination Dataset for Advancing Multimodal Reasoning | Code | 1 |
| MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Diversity | Code | 2 |
| QuIIL at T3 challenge: Towards Automation in Life-Saving Intervention Procedures from First-Person View | Code | 0 |
| Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark | Code | 1 |
| Multimodal Reranking for Knowledge-Intensive Visual Question Answering | | 0 |
| ProcTag: Process Tagging for Assessing the Efficacy of Document Instruction Data | | 0 |
| EchoSight: Advancing Visual-Language Models with Wiki Knowledge | | 0 |
| TM-PATHVQA: 90000+ Textless Multilingual Questions for Medical Visual Question Answering | | 0 |
Page 12 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |