SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 751–775 of 2177 papers

Title | Status | Hype
Mitigating Low-Level Visual Hallucinations Requires Self-Awareness: Database, Model and Training Strategy | – | 0
Improved Alignment of Modalities in Large Vision Language Models | – | 0
LEGO-Puzzles: How Good Are MLLMs at Multi-Step Spatial Reasoning? | – | 0
ORION: A Holistic End-to-End Autonomous Driving Framework by Vision-Language Instructed Action Generation | – | 0
VGAT: A Cancer Survival Analysis Framework Transitioning from Generative Visual Question Answering to Genomic Reconstruction | Code | 0
MAGIC-VQA: Multimodal And Grounded Inference with Commonsense Knowledge for Visual Question Answering | – | 0
DiN: Diffusion Model for Robust Medical VQA with Semantic Noisy Labels | – | 0
Where is this coming from? Making groundedness count in the evaluation of Document VQA models | – | 0
Expanding the Boundaries of Vision Prior Knowledge in Multi-modal Large Language Models | – | 0
Progressive Prompt Detailing for Improved Alignment in Text-to-Image Generative Models | Code | 0
Does Chain-of-Thought Reasoning Help Mobile GUI Agent? An Empirical Study | Code | 0
UMIT: Unifying Medical Imaging Tasks via Vision-Language Models | Code | 0
A Vision Centric Remote Sensing Benchmark | – | 0
GraspCorrect: Robotic Grasp Correction via Vision-Language Model-Guided Feedback | – | 0
TruthLens: A Training-Free Paradigm for DeepFake Detection | – | 0
UPME: An Unsupervised Peer Review Framework for Multimodal Large Language Model Evaluation | – | 0
EfficientLLaVA: Generalizable Auto-Pruning for Large Vision-language Models | – | 0
Marten: Visual Question Answering with Mask Generation for Multi-modal Document Understanding | Code | 0
Task-Oriented Feature Compression for Multimodal Understanding via Device-Edge Co-Inference | – | 0
From Head to Tail: Towards Balanced Representation in Large Vision-Language Models through Adaptive Data Calibration | – | 0
PEBench: A Fictitious Dataset to Benchmark Machine Unlearning for Multimodal Large Language Models | – | 0
GeoRSMLLM: A Multimodal Large Language Model for Vision-Language Tasks in Geoscience and Remote Sensing | – | 0
T2I-FineEval: Fine-Grained Compositional Metric for Text-to-Image Evaluation | Code | 0
DynRsl-VLM: Enhancing Autonomous Driving Perception with Dynamic Resolution Vision-Language Models | – | 0
SurgicalVLM-Agent: Towards an Interactive AI Co-Pilot for Pituitary Surgery | – | 0
Page 31 of 88

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | – | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | – | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | – | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | – | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | – | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | – | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | – | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | – | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | – | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | – | Unverified