SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1121–1130 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | Code | 2 |
| Kosmos-2: Grounding Multimodal Large Language Models to the World | Code | 1 |
| Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Code | 2 |
| Visual Question Answering in Remote Sensing with Cross-Attention and Multimodal Information Bottleneck | — | 0 |
| Switch-BERT: Learning to Model Multimodal Interactions by Switching Attention and Input | — | 0 |
| TaCA: Upgrading Your Visual Foundation Model with Task-agnostic Compatible Adapter | Code | 0 |
| Investigating Prompting Techniques for Zero- and Few-Shot Visual Question Answering | Code | 1 |
| Encyclopedic VQA: Visual questions about detailed properties of fine-grained categories | — | 0 |
| LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models | Code | 2 |
| Improving Selective Visual Question Answering by Learning from Your Peers | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |