SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1–50 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| Describe Anything Model for Visual Question Answering on Text-rich Images | Code | 1 |
| Barriers in Integrating Medical Visual Question Answering into Radiology Workflows: A Scoping Review and Clinicians' Insights | | 0 |
| MagiC: Evaluating Multimodal Cognition Toward Grounded Visual Reasoning | | 0 |
| LinguaMark: Do Multimodal Models Speak Fairly? A Benchmark-Based Evaluation | | 0 |
| Evaluating Attribute Confusion in Fashion Text-to-Image Generation | | 0 |
| Enhancing Scientific Visual Question Answering through Multimodal Reasoning and Ensemble Modeling | | 0 |
| ReLoop: "Seeing Twice and Thinking Backwards" via Closed-loop Training to Mitigate Hallucinations in Multimodal Understanding | | 0 |
| Revisiting CroPA: A Reproducibility Study and Enhancements for Cross-Prompt Adversarial Transferability in Vision-Language Models | Code | 0 |
| SMMILE: An Expert-Driven Benchmark for Multimodal Medical In-Context Learning | | 0 |
| DrishtiKon: Multi-Granular Visual Grounding for Text-Rich Document Images | Code | 0 |
| FOCUS: Internal MLLM Representations for Efficient Fine-Grained Visual Question Answering | | 0 |
| HRIBench: Benchmarking Vision-Language Models for Real-Time Human Perception in Human-Robot Interaction | Code | 0 |
| Semantic-enhanced Modality-asymmetric Retrieval for Online E-commerce Search | | 0 |
| GEMeX-ThinkVG: Towards Thinking with Visual Grounding in Medical VQA via Reinforcement Learning | | 0 |
| Scene-R1: Video-Grounded Large Language Models for 3D Scene Reasoning without 3D Annotations | | 0 |
| Can Common VLMs Rival Medical VLMs? Evaluation and Strategic Insights | | 0 |
| MEGC2025: Micro-Expression Grand Challenge on Spot Then Recognize and Visual Question Answering | | 0 |
| Adapting Lightweight Vision Language Models for Radiological Visual Question Answering | Code | 0 |
| SimpleDoc: Multi-Modal Document Understanding with Dual-Cue Page Retrieval and Iterative Refinement | Code | 1 |
| CAPO: Reinforcing Consistent Reasoning in Medical Decision-Making | | 0 |
| AntiGrounding: Lifting Robotic Actions into VLM Representation Space for Decision Making | | 0 |
| MTabVQA: Evaluating Multi-Tabular Reasoning of Language Models in Visual Space | | 0 |
| A Fast, Reliable, and Secure Programming Language for LLM Agents with Code Actions | | 0 |
| HalLoc: Token-level Localization of Hallucinations for Vision Language Models | Code | 0 |
| SlotPi: Physics-informed Object-centric Reasoning Models | Code | 0 |
| Provoking Multi-modal Few-Shot LVLM via Exploration-Exploitation In-Context Learning | | 0 |
| Kvasir-VQA-x1: A Multimodal Dataset for Medical Reasoning and Robust MedVQA in Gastrointestinal Endoscopy | Code | 0 |
| Outside Knowledge Conversational Video (OKCV) Dataset -- Dialoguing over Videos | Code | 0 |
| FlagEvalMM: A Flexible Framework for Comprehensive Multimodal Model Evaluation | Code | 2 |
| An Open-Source Software Toolkit & Benchmark Suite for the Evaluation and Adaptation of Multimodal Action Models | | 0 |
| PhyBlock: A Progressive Benchmark for Physical Understanding and Planning via 3D Block Assembly | | 0 |
| HAIBU-ReMUD: Reasoning Multimodal Ultrasound Dataset and Model Bridging to General Specific Domains | Code | 0 |
| Hallucination at a Glance: Controlled Visual Edits and Fine-Grained Multimodal Learning | | 0 |
| Multi-Step Visual Reasoning with Visual Tokens Scaling and Verification | Code | 1 |
| Lingshu: A Generalist Foundation Model for Unified Multimodal Medical Understanding and Reasoning | | 0 |
| Meta-Adaptive Prompt Distillation for Few-Shot Visual Question Answering | | 0 |
| Ontology-based knowledge representation for bone disease diagnosis: a foundation for safe and sustainable medical artificial intelligence systems | | 0 |
| TextVidBench: A Benchmark for Long Video Scene Text Understanding | | 0 |
| ReXVQA: A Large-scale Visual Question Answering Benchmark for Generalist Chest X-ray Understanding | | 0 |
| Hanfu-Bench: A Multimodal Benchmark on Cross-Temporal Cultural Understanding and Transcreation | | 0 |
| Learning Sparsity for Effective and Efficient Music Performance Question Answering | | 0 |
| Fast or Slow? Integrating Fast Intuition and Deliberate Thinking for Enhancing Visual Question Answering | | 0 |
| MedOrch: Medical Diagnosis with Tool-Augmented Reasoning Agents for Flexible Extensibility | | 0 |
| VideoCAD: A Large-Scale Video Dataset for Learning UI Interactions and 3D Reasoning from CAD Software | Code | 1 |
| Vision LLMs Are Bad at Hierarchical Visual Understanding, and LLMs Are the Bottleneck | | 0 |
| Light as Deception: GPT-driven Natural Relighting Against Vision-Language Pre-training Models | | 0 |
| mRAG: Elucidating the Design Space of Multi-modal Retrieval-Augmented Generation | | 0 |
| QLIP: A Dynamic Quadtree Vision Prior Enhances MLLM Performance Without Retraining | Code | 0 |
| Multi-Sourced Compositional Generalization in Visual Question Answering | Code | 0 |
| Interpreting Chest X-rays Like a Radiologist: A Benchmark with Clinical Reasoning | Code | 1 |
Page 1 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |