SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 26–50 of 2177 papers (page 2 of 88)

| Title | Status | Hype |
|---|---|---|
| MMBench: Is Your Multi-modal Model an All-around Player? | Code | 5 |
| Uni-MoE: Scaling Unified Multimodal LLMs with Mixture of Experts | Code | 5 |
| Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | Code | 5 |
| CogAgent: A Visual Language Model for GUI Agents | Code | 5 |
| CogVLM: Visual Expert for Pretrained Language Models | Code | 5 |
| VisionLLM v2: An End-to-End Generalist Multimodal Large Language Model for Hundreds of Vision-Language Tasks | Code | 5 |
| Show-o: One Single Transformer to Unify Multimodal Understanding and Generation | Code | 5 |
| Wings: Learning Multimodal LLMs without Text-only Forgetting | Code | 5 |
| The All-Seeing Project V2: Towards General Relation Comprehension of the Open World | Code | 4 |
| MIMIC-IT: Multi-Modal In-Context Instruction Tuning | Code | 4 |
| TinyLLaVA: A Framework of Small-scale Large Multimodal Models | Code | 4 |
| SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal Large Language Models | Code | 4 |
| LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | Code | 4 |
| A Survey on Vision-Language-Action Models for Embodied AI | Code | 4 |
| GPT-4V(ision) is a Generalist Web Agent, if Grounded | Code | 4 |
| Scaling Up Biomedical Vision-Language Models: Fine-Tuning, Instruction Tuning, and Multi-Modal Learning | Code | 4 |
| Flamingo: a Visual Language Model for Few-Shot Learning | Code | 4 |
| BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models | Code | 4 |
| Otter: A Multi-Modal Model with In-Context Instruction Tuning | Code | 4 |
| OpenDriveVLA: Towards End-to-end Autonomous Driving with Large Vision Language Action Model | Code | 4 |
| OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models | Code | 4 |
| OtterHD: A High-Resolution Multi-modality Model | Code | 4 |
| mPLUG-Owl2: Revolutionizing Multi-modal Large Language Model with Modality Collaboration | Code | 4 |
| OmniDrive: A Holistic Vision-Language Dataset for Autonomous Driving with Counterfactual Reasoning | Code | 4 |
| mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video | Code | 4 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |