SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 151-175 of 2177 papers

Title | Status | Hype
KOFFVQA: An Objectively Evaluated Free-form VQA Benchmark for Large Vision-Language Models in the Korean Language | Code | 0
OpenDriveVLA: Towards End-to-end Autonomous Driving with Large Vision Language Action Model | Code | 4
How Well Can Vison-Language Models Understand Humans' Intention? An Open-ended Theory of Mind Question Evaluation Benchmark | - | 0
JEEM: Vision-Language Understanding in Four Arabic Dialects | - | 0
CTRL-O: Language-Controllable Object-Centric Visual Representation Learning | - | 0
Fine-Grained Evaluation of Large Vision-Language Models in Autonomous Driving | Code | 1
FaceBench: A Multi-View Multi-Level Facial Attribute VQA Dataset for Benchmarking Face Perception MLLMs | Code | 1
Mitigating Low-Level Visual Hallucinations Requires Self-Awareness: Database, Model and Training Strategy | - | 0
Feature4X: Bridging Any Monocular Video to 4D Agentic AI with Versatile Gaussian Feature Fields | - | 0
Vision-Amplified Semantic Entropy for Hallucination Detection in Medical Visual Question Answering | - | 0
Instruction-Oriented Preference Alignment for Enhancing Multi-Modal Comprehension Capability of MLLMs | - | 0
LEGO-Puzzles: How Good Are MLLMs at Multi-Step Spatial Reasoning? | - | 0
Mind the Gap: Benchmarking Spatial Reasoning in Vision-Language Models | Code | 1
VGAT: A Cancer Survival Analysis Framework Transitioning from Generative Visual Question Answering to Genomic Reconstruction | Code | 0
ORION: A Holistic End-to-End Autonomous Driving Framework by Vision-Language Instructed Action Generation | - | 0
PAVE: Patching and Adapting Video Large Language Models | Code | 1
Improved Alignment of Modalities in Large Vision Language Models | - | 0
Med3DVLM: An Efficient Vision-Language Model for 3D Medical Image Analysis | Code | 2
Where is this coming from? Making groundedness count in the evaluation of Document VQA models | - | 0
MAGIC-VQA: Multimodal And Grounded Inference with Commonsense Knowledge for Visual Question Answering | - | 0
DiN: Diffusion Model for Robust Medical VQA with Semantic Noisy Labels | - | 0
MC-LLaVA: Multi-Concept Personalized Vision-Language Model | Code | 2
Expanding the Boundaries of Vision Prior Knowledge in Multi-modal Large Language Models | - | 0
Progressive Prompt Detailing for Improved Alignment in Text-to-Image Generative Models | Code | 0
Does Chain-of-Thought Reasoning Help Mobile GUI Agent? An Empirical Study | Code | 0
Page 7 of 88
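The pagination figures are internally consistent: at 25 papers per page, 2177 papers span ceil(2177 / 25) = 88 pages, and page 7 covers entries 151-175. A quick sanity check, a minimal sketch with the constants taken from the listing above:

```python
import math

TOTAL_PAPERS = 2177  # total papers in the index (from the listing)
PER_PAGE = 25        # papers shown per page (from the listing)
PAGE = 7             # current page (from the listing)

pages = math.ceil(TOTAL_PAPERS / PER_PAGE)   # 88 pages
first = (PAGE - 1) * PER_PAGE + 1            # entry 151
last = min(PAGE * PER_PAGE, TOTAL_PAPERS)    # entry 175
print(f"Showing {first}-{last} of {TOTAL_PAPERS} papers (page {PAGE} of {pages})")
```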

Benchmark Results

# | Model | Metric | Claimed | Verified | Status ("-" = no verified score)
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | - | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | - | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | - | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | - | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | - | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | - | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | - | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | - | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | - | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | - | Unverified
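Every entry above reports a "GPT-4 score", i.e., GPT-4 used as an automatic judge of free-form VQA answers. Below is a minimal sketch of how such a score is commonly computed with the OpenAI Python SDK; the judging prompt, the 0-100 scale, and the plain averaging are illustrative assumptions, not the exact protocol behind these claimed numbers.

```python
# Sketch of GPT-4-as-judge scoring. The prompt wording, 0-100 scale,
# and averaging are assumptions, not this leaderboard's verified protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def judge_answer(question: str, reference: str, candidate: str) -> float:
    """Ask GPT-4 to rate a candidate answer against a reference, 0-100."""
    prompt = (
        "Rate how well the candidate answer matches the reference answer "
        "on a scale of 0 to 100. Reply with the number only.\n\n"
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Candidate answer: {candidate}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic judging
    )
    # Assumes the judge complies and returns a bare number.
    return float(response.choices[0].message.content.strip())


def gpt4_score(examples: list[dict]) -> float:
    """Average the judge's ratings over a benchmark split."""
    scores = [
        judge_answer(e["question"], e["reference"], e["candidate"])
        for e in examples
    ]
    return sum(scores) / len(scores)
```

Judging prompts and scales vary across papers, so claimed GPT-4 scores are not directly comparable until reproduced under a single protocol, which may be why every entry here is still marked Unverified.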