SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 101–125 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| Where do Large Vision-Language Models Look at when Answering Questions? | Code | 2 |
| DriveLMM-o1: A Step-by-Step Reasoning Dataset and Large Multimodal Model for Driving Scenario Understanding | Code | 2 |
| AnyAnomaly: Zero-Shot Customizable Video Anomaly Detection with LVLM | Code | 2 |
| Keeping Yourself is Important in Downstream Tuning Multimodal Large Language Model | Code | 2 |
| Multimodal RewardBench: Holistic Evaluation of Reward Models for Vision Language Models | Code | 2 |
| Re-Align: Aligning Vision Language Models via Retrieval-Augmented Direct Preference Optimization | Code | 2 |
| Analyzing and Boosting the Power of Fine-Grained Visual Recognition for Multi-modal Large Language Models | Code | 2 |
| A Simple Aerial Detection Baseline of Multimodal Language Models | Code | 2 |
| Parameter-Inverted Image Pyramid Networks for Visual Perception and Multimodal Understanding | Code | 2 |
| Dual Diffusion for Unified Image Generation and Understanding | Code | 2 |
| AutoTrust: Benchmarking Trustworthiness in Large Vision Language Models for Autonomous Driving | Code | 2 |
| Doe-1: Closed-Loop Autonomous Driving with Large World Model | Code | 2 |
| Towards a Multimodal Large Language Model with Pixel-Level Insight for Biomedicine | Code | 2 |
| BiMediX2: Bio-Medical EXpert LMM for Diverse Medical Modalities | Code | 2 |
| TACO: Learning Multi-modal Action Models with Synthetic Chains-of-Thought-and-Action | Code | 2 |
| LinVT: Empower Your Image-level Large Language Model to Understand Videos | Code | 2 |
| FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression | Code | 2 |
| Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Context Sparsification | Code | 2 |
| Path-RAG: Knowledge-Guided Key Region Retrieval for Open-ended Pathology Visual Question Answering | Code | 2 |
| Grounding-IQA: Multimodal Language Grounding Model for Image Quality Assessment | Code | 2 |
| ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration | Code | 2 |
| Augmenting Multimodal LLMs with Self-Reflective Tokens for Knowledge-based Visual Question Answering | Code | 2 |
| GMAI-VL & GMAI-VL-5.5M: A Large Vision-Language Model and A Comprehensive Multimodal Dataset Towards General Medical AI | Code | 2 |
| MC-LLaVA: Multi-Concept Personalized Vision-Language Model | Code | 2 |
| VQA^2: Visual Question Answering for Video Quality Assessment | Code | 2 |
Page 5 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |