SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 101–150 of 2177 papers

Title | Status | Hype
--- | --- | ---
Dragonfly: Multi-Resolution Zoom-In Encoding Enhances Vision-Language Models | Code | 2
MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action | Code | 2
MouSi: Poly-Visual-Expert Vision-Language Models | Code | 2
NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario | Code | 2
MiniGPT-Med: Large Language Model as a General Interface for Radiology Diagnosis | Code | 2
MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning | Code | 2
Patho-R1: A Multimodal Reinforcement Learning-Based Pathology Expert Reasoner | Code | 2
Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate | Code | 2
MMDU: A Multi-Turn Multi-Image Dialog Understanding Benchmark and Instruction-Tuning Dataset for LVLMs | Code | 2
Med-Flamingo: a Multimodal Medical Few-shot Learner | Code | 2
GSCo: Towards Generalizable AI in Medicine via Generalist-Specialist Collaboration | Code | 2
MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Diversity | Code | 2
MedM-VL: What Makes a Good Medical LVLM? | Code | 2
MDETR - Modulated Detection for End-to-End Multi-Modal Understanding | Code | 2
MC-LLaVA: Multi-Concept Personalized Vision-Language Model | Code | 2
Med3DVLM: An Efficient Vision-Language Model for 3D Medical Image Analysis | Code | 2
MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering | Code | 2
Doe-1: Closed-Loop Autonomous Driving with Large World Model | Code | 2
MedPromptX: Grounded Multimodal Prompting for Chest X-ray Diagnosis | Code | 2
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Code | 2
DreamLLM: Synergistic Multimodal Comprehension and Creation | Code | 2
CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts | Code | 2
A Bounding Box is Worth One Token: Interleaving Layout and Text in a Large Language Model for Document Understanding | Code | 2
OneLLM: One Framework to Align All Modalities with Language | Code | 2
Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Context Sparsification | Code | 2
AnyAnomaly: Zero-Shot Customizable Video Anomaly Detection with LVLM | Code | 2
DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception | Code | 2
Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models | Code | 2
MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding | Code | 2
Analyzing and Boosting the Power of Fine-Grained Visual Recognition for Multi-modal Large Language Models | Code | 2
LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models | Code | 2
Efficient Large Multi-modal Models via Visual Context Compression | Code | 2
ConvLLaVA: Hierarchical Backbones as Visual Encoder for Large Multimodal Models | Code | 2
LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents | Code | 2
LLMGA: Multimodal Large Language Model based Generation Assistant | Code | 2
CoLLaVO: Crayon Large Language and Vision mOdel | Code | 2
LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models | Code | 2
LOVA3: Learning to Visual Question Answering, Asking and Assessment | Code | 2
LinVT: Empower Your Image-level Large Language Model to Understand Videos | Code | 2
List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs | Code | 2
LingoQA: Visual Question Answering for Autonomous Driving | Code | 2
Large Continual Instruction Assistant | Code | 2
JourneyDB: A Benchmark for Generative Image Understanding | Code | 2
Keeping Yourself is Important in Downstream Tuning Multimodal Large Language Model | Code | 2
LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models | Code | 2
Aligning Modalities in Vision Large Language Models via Preference Fine-tuning | Code | 2
ILLUME+: Illuminating Unified MLLM with Dual Visual Tokenization and Diffusion Refinement | Code | 2
Imp: Highly Capable Large Multimodal Models for Mobile Devices | Code | 2
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Code | 2
Page 3 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified