
Visual Question Answering

MLLM Leaderboard

Papers

Showing 126–150 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding | Code | 2 |
| VoxelPrompt: A Vision-Language Agent for Grounded Medical Image Analysis | Code | 2 |
| Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate | Code | 2 |
| TEOChat: A Large Vision-Language Assistant for Temporal Earth Observation Data | Code | 2 |
| Large Continual Instruction Assistant | Code | 2 |
| Phantom of Latent for Large Language and Vision Models | Code | 2 |
| One missing piece in Vision and Language: A Survey on Comics Understanding | Code | 2 |
| EyeCLIP: A visual-language foundation model for multi-modal ophthalmic image analysis | Code | 2 |
| PA-LLaVA: A Large Language-Vision Assistant for Human Pathology Image Understanding | Code | 2 |
| A Survey on Benchmarks of Multimodal Large Language Models | Code | 2 |
| GMAI-MMBench: A Comprehensive Multimodal Evaluation Benchmark Towards General Medical AI | Code | 2 |
| Towards A Generalizable Pathology Foundation Model via Unified Knowledge Distillation | Code | 2 |
| MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Diversity | Code | 2 |
| DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception | Code | 2 |
| WSI-VQA: Interpreting Whole Slide Images by Generative Visual Question Answering | Code | 2 |
| MiniGPT-Med: Large Language Model as a General Interface for Radiology Diagnosis | Code | 2 |
| A Bounding Box is Worth One Token: Interleaving Layout and Text in a Large Language Model for Document Understanding | Code | 2 |
| Efficient Large Multi-modal Models via Visual Context Compression | Code | 2 |
| MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning | Code | 2 |
| TroL: Traversal of Layers for Large Language and Vision Models | Code | 2 |
| VRSBench: A Versatile Vision-Language Benchmark Dataset for Remote Sensing Image Understanding | Code | 2 |
| MMDU: A Multi-Turn Multi-Image Dialog Understanding Benchmark and Instruction-Tuning Dataset for LVLMs | Code | 2 |
| Explore the Limits of Omni-modal Pretraining at Scale | Code | 2 |
| Yo'LLaVA: Your Personalized Language and Vision Assistant | Code | 2 |
| Towards Vision-Language Geo-Foundation Model: A Survey | Code | 2 |
Page 6 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |