SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 26–50 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| Provoking Multi-modal Few-Shot LVLM via Exploration-Exploitation In-Context Learning |  | 0 |
| Kvasir-VQA-x1: A Multimodal Dataset for Medical Reasoning and Robust MedVQA in Gastrointestinal Endoscopy | Code | 0 |
| Outside Knowledge Conversational Video (OKCV) Dataset -- Dialoguing over Videos | Code | 0 |
| FlagEvalMM: A Flexible Framework for Comprehensive Multimodal Model Evaluation | Code | 2 |
| An Open-Source Software Toolkit & Benchmark Suite for the Evaluation and Adaptation of Multimodal Action Models |  | 0 |
| PhyBlock: A Progressive Benchmark for Physical Understanding and Planning via 3D Block Assembly |  | 0 |
| HAIBU-ReMUD: Reasoning Multimodal Ultrasound Dataset and Model Bridging to General Specific Domains | Code | 0 |
| Hallucination at a Glance: Controlled Visual Edits and Fine-Grained Multimodal Learning |  | 0 |
| Multi-Step Visual Reasoning with Visual Tokens Scaling and Verification | Code | 1 |
| Lingshu: A Generalist Foundation Model for Unified Multimodal Medical Understanding and Reasoning |  | 0 |
| Meta-Adaptive Prompt Distillation for Few-Shot Visual Question Answering |  | 0 |
| Ontology-based knowledge representation for bone disease diagnosis: a foundation for safe and sustainable medical artificial intelligence systems |  | 0 |
| TextVidBench: A Benchmark for Long Video Scene Text Understanding |  | 0 |
| ReXVQA: A Large-scale Visual Question Answering Benchmark for Generalist Chest X-ray Understanding |  | 0 |
| Hanfu-Bench: A Multimodal Benchmark on Cross-Temporal Cultural Understanding and Transcreation |  | 0 |
| Learning Sparsity for Effective and Efficient Music Performance Question Answering |  | 0 |
| Fast or Slow? Integrating Fast Intuition and Deliberate Thinking for Enhancing Visual Question Answering |  | 0 |
| MedOrch: Medical Diagnosis with Tool-Augmented Reasoning Agents for Flexible Extensibility |  | 0 |
| VideoCAD: A Large-Scale Video Dataset for Learning UI Interactions and 3D Reasoning from CAD Software | Code | 1 |
| Vision LLMs Are Bad at Hierarchical Visual Understanding, and LLMs Are the Bottleneck |  | 0 |
| Light as Deception: GPT-driven Natural Relighting Against Vision-Language Pre-training Models |  | 0 |
| mRAG: Elucidating the Design Space of Multi-modal Retrieval-Augmented Generation |  | 0 |
| QLIP: A Dynamic Quadtree Vision Prior Enhances MLLM Performance Without Retraining | Code | 0 |
| Multi-Sourced Compositional Generalization in Visual Question Answering | Code | 0 |
| Interpreting Chest X-rays Like a Radiologist: A Benchmark with Clinical Reasoning | Code | 1 |
Page 2 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 |  | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 |  | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 |  | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 |  | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 |  | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 |  | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 |  | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 |  | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 |  | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 |  | Unverified |