SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 151-200 of 2177 papers

Title | Status | Hype
KOFFVQA: An Objectively Evaluated Free-form VQA Benchmark for Large Vision-Language Models in the Korean Language | Code | 0
OpenDriveVLA: Towards End-to-end Autonomous Driving with Large Vision Language Action Model | Code | 4
How Well Can Vision-Language Models Understand Humans' Intention? An Open-ended Theory of Mind Question Evaluation Benchmark | - | 0
JEEM: Vision-Language Understanding in Four Arabic Dialects | - | 0
Fine-Grained Evaluation of Large Vision-Language Models in Autonomous Driving | Code | 1
CTRL-O: Language-Controllable Object-Centric Visual Representation Learning | - | 0
FaceBench: A Multi-View Multi-Level Facial Attribute VQA Dataset for Benchmarking Face Perception MLLMs | Code | 1
Mitigating Low-Level Visual Hallucinations Requires Self-Awareness: Database, Model and Training Strategy | - | 0
Vision-Amplified Semantic Entropy for Hallucination Detection in Medical Visual Question Answering | - | 0
Instruction-Oriented Preference Alignment for Enhancing Multi-Modal Comprehension Capability of MLLMs | - | 0
Feature4X: Bridging Any Monocular Video to 4D Agentic AI with Versatile Gaussian Feature Fields | - | 0
VGAT: A Cancer Survival Analysis Framework Transitioning from Generative Visual Question Answering to Genomic Reconstruction | Code | 0
Mind the Gap: Benchmarking Spatial Reasoning in Vision-Language Models | Code | 1
PAVE: Patching and Adapting Video Large Language Models | Code | 1
Improved Alignment of Modalities in Large Vision Language Models | - | 0
LEGO-Puzzles: How Good Are MLLMs at Multi-Step Spatial Reasoning? | - | 0
ORION: A Holistic End-to-End Autonomous Driving Framework by Vision-Language Instructed Action Generation | - | 0
Med3DVLM: An Efficient Vision-Language Model for 3D Medical Image Analysis | Code | 2
Where is this coming from? Making groundedness count in the evaluation of Document VQA models | - | 0
DiN: Diffusion Model for Robust Medical VQA with Semantic Noisy Labels | - | 0
MC-LLaVA: Multi-Concept Personalized Vision-Language Model | Code | 2
MAGIC-VQA: Multimodal And Grounded Inference with Commonsense Knowledge for Visual Question Answering | - | 0
Expanding the Boundaries of Vision Prior Knowledge in Multi-modal Large Language Models | - | 0
Progressive Prompt Detailing for Improved Alignment in Text-to-Image Generative Models | Code | 0
Does Chain-of-Thought Reasoning Help Mobile GUI Agent? An Empirical Study | Code | 0
A Vision Centric Remote Sensing Benchmark | - | 0
UMIT: Unifying Medical Imaging Tasks via Vision-Language Models | Code | 0
UPME: An Unsupervised Peer Review Framework for Multimodal Large Language Model Evaluation | - | 0
EfficientLLaVA: Generalizable Auto-Pruning for Large Vision-Language Models | - | 0
GraspCorrect: Robotic Grasp Correction via Vision-Language Model-Guided Feedback | - | 0
TruthLens: A Training-Free Paradigm for DeepFake Detection | - | 0
Marten: Visual Question Answering with Mask Generation for Multi-modal Document Understanding | Code | 0
Where do Large Vision-Language Models Look at when Answering Questions? | Code | 2
NuPlanQA: A Large-Scale Dataset and Benchmark for Multi-View Driving Scene Understanding in Multi-Modal Large Language Models | Code | 1
MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research | Code | 1
Task-Oriented Feature Compression for Multimodal Understanding via Device-Edge Co-Inference | - | 0
From Head to Tail: Towards Balanced Representation in Large Vision-Language Models through Adaptive Data Calibration | - | 0
GeoRSMLLM: A Multimodal Large Language Model for Vision-Language Tasks in Geoscience and Remote Sensing | - | 0
PEBench: A Fictitious Dataset to Benchmark Machine Unlearning for Multimodal Large Language Models | - | 0
DynRsl-VLM: Enhancing Autonomous Driving Perception with Dynamic Resolution Vision-Language Models | - | 0
T2I-FineEval: Fine-Grained Compositional Metric for Text-to-Image Evaluation | Code | 0
How Do Multimodal Large Language Models Handle Complex Multimodal Reasoning? Placing Them in An Extensible Escape Game | Code | 1
DriveLMM-o1: A Step-by-Step Reasoning Dataset and Large Multimodal Model for Driving Scenario Understanding | Code | 2
On the Limitations of Vision-Language Models in Understanding Image Transforms | - | 0
SurgicalVLM-Agent: Towards an Interactive AI Co-Pilot for Pituitary Surgery | - | 0
SimLingo: Vision-Only Closed-Loop Autonomous Driving with Language-Action Alignment | Code | 3
Seeing and Reasoning with Confidence: Supercharging Multimodal LLMs with an Uncertainty-Aware Agentic Framework | - | 0
From Text to Visuals: Using LLMs to Generate Math Diagrams with Vector Graphics | - | 0
Robusto-1 Dataset: Comparing Humans and VLMs on real out-of-distribution Autonomous Driving VQA from Peru | - | 0
TI-JEPA: An Innovative Energy-based Joint Embedding Strategy for Text-Image Multimodal Systems | - | 0
Page 4 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | - | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | - | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | - | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | - | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | - | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | - | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | - | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | - | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | - | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | - | Unverified