SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 176–200 of 2177 papers (page 8 of 88)

| Title | Status | Hype |
|---|---|---|
| A Vision Centric Remote Sensing Benchmark |  | 0 |
| UMIT: Unifying Medical Imaging Tasks via Vision-Language Models | Code | 0 |
| UPME: An Unsupervised Peer Review Framework for Multimodal Large Language Model Evaluation |  | 0 |
| EfficientLLaVA: Generalizable Auto-Pruning for Large Vision-language Models |  | 0 |
| GraspCorrect: Robotic Grasp Correction via Vision-Language Model-Guided Feedback |  | 0 |
| TruthLens: A Training-Free Paradigm for DeepFake Detection |  | 0 |
| Marten: Visual Question Answering with Mask Generation for Multi-modal Document Understanding | Code | 0 |
| Where do Large Vision-Language Models Look at when Answering Questions? | Code | 2 |
| NuPlanQA: A Large-Scale Dataset and Benchmark for Multi-View Driving Scene Understanding in Multi-Modal Large Language Models | Code | 1 |
| MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research | Code | 1 |
| Task-Oriented Feature Compression for Multimodal Understanding via Device-Edge Co-Inference |  | 0 |
| From Head to Tail: Towards Balanced Representation in Large Vision-Language Models through Adaptive Data Calibration |  | 0 |
| GeoRSMLLM: A Multimodal Large Language Model for Vision-Language Tasks in Geoscience and Remote Sensing |  | 0 |
| PEBench: A Fictitious Dataset to Benchmark Machine Unlearning for Multimodal Large Language Models |  | 0 |
| DynRsl-VLM: Enhancing Autonomous Driving Perception with Dynamic Resolution Vision-Language Models |  | 0 |
| T2I-FineEval: Fine-Grained Compositional Metric for Text-to-Image Evaluation | Code | 0 |
| How Do Multimodal Large Language Models Handle Complex Multimodal Reasoning? Placing Them in An Extensible Escape Game | Code | 1 |
| DriveLMM-o1: A Step-by-Step Reasoning Dataset and Large Multimodal Model for Driving Scenario Understanding | Code | 2 |
| On the Limitations of Vision-Language Models in Understanding Image Transforms |  | 0 |
| SurgicalVLM-Agent: Towards an Interactive AI Co-Pilot for Pituitary Surgery |  | 0 |
| SimLingo: Vision-Only Closed-Loop Autonomous Driving with Language-Action Alignment | Code | 3 |
| Seeing and Reasoning with Confidence: Supercharging Multimodal LLMs with an Uncertainty-Aware Agentic Framework |  | 0 |
| From Text to Visuals: Using LLMs to Generate Math Diagrams with Vector Graphics |  | 0 |
| Robusto-1 Dataset: Comparing Humans and VLMs on real out-of-distribution Autonomous Driving VQA from Peru |  | 0 |
| TI-JEPA: An Innovative Energy-based Joint Embedding Strategy for Text-Image Multimodal Systems |  | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 |  | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 |  | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 |  | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 |  | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 |  | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 |  | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 |  | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 |  | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 |  | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 |  | Unverified |