SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 921–930 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| On the Promises and Challenges of Multimodal Foundation Models for Geographical, Environmental, Agricultural, and Urban Planning Applications | | 0 |
| Towards a Unified Multimodal Reasoning Framework | Code | 0 |
| DriveLM: Driving with Graph Visual Question Answering | Code | 3 |
| InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks | Code | 1 |
| Reducing Hallucinations: Enhancing VQA for Flood Disaster Damage Assessment with Visual Contexts | | 0 |
| VCoder: Versatile Vision Encoders for Multimodal Large Language Models | Code | 2 |
| V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs | Code | 2 |
| LingoQA: Visual Question Answering for Autonomous Driving | Code | 2 |
| Object Attribute Matters in Visual Question Answering | Code | 0 |
| Interactive Visual Task Learning for Robots | | 0 |
Page 93 of 218

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |