SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 601–650 of 2177 papers

Title | Status | Hype
Multimodal fusion of imaging and genomics for lung cancer recurrence prediction | Code | 1
Fine-grained Image Classification and Retrieval by Combining Visual and Locally Pooled Textual Features | Code | 1
In Defense of Grid Features for Visual Question Answering | Code | 1
Large-scale Pretraining for Visual Dialog: A Simple State-of-the-Art Baseline | Code | 1
Overcoming Data Limitation in Medical Visual Question Answering | Code | 1
UNITER: UNiversal Image-TExt Representation Learning | Code | 1
Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases | Code | 1
VL-BERT: Pre-training of Generic Visual-Linguistic Representations | Code | 1
LXMERT: Learning Cross-Modality Encoder Representations from Transformers | Code | 1
ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks | Code | 1
Scene Text Visual Question Answering | Code | 1
OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge | Code | 1
Gated Hierarchical Attention for Image Captioning | Code | 1
Faithful Multimodal Explanation for Visual Question Answering | Code | 1
R-VQA: Learning Visual Relation Facts with Semantic Attention for Visual Question Answering | Code | 1
AI2-THOR: An Interactive 3D Environment for Visual AI | Code | 1
Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments | Code | 1
Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering | Code | 1
Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | Code | 1
CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning | Code | 1
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization | Code | 1
Hierarchical Question-Image Co-Attention for Visual Question Answering | Code | 1
VQA: Visual Question Answering | Code | 1
Barriers in Integrating Medical Visual Question Answering into Radiology Workflows: A Scoping Review and Clinicians' Insights | - | 0
MagiC: Evaluating Multimodal Cognition Toward Grounded Visual Reasoning | - | 0
Evaluating Attribute Confusion in Fashion Text-to-Image Generation | - | 0
LinguaMark: Do Multimodal Models Speak Fairly? A Benchmark-Based Evaluation | - | 0
Enhancing Scientific Visual Question Answering through Multimodal Reasoning and Ensemble Modeling | - | 0
ReLoop: "Seeing Twice and Thinking Backwards" via Closed-loop Training to Mitigate Hallucinations in Multimodal understanding | - | 0
Revisiting CroPA: A Reproducibility Study and Enhancements for Cross-Prompt Adversarial Transferability in Vision-Language Models | Code | 0
DrishtiKon: Multi-Granular Visual Grounding for Text-Rich Document Images | Code | 0
SMMILE: An Expert-Driven Benchmark for Multimodal Medical In-Context Learning | - | 0
FOCUS: Internal MLLM Representations for Efficient Fine-Grained Visual Question Answering | - | 0
HRIBench: Benchmarking Vision-Language Models for Real-Time Human Perception in Human-Robot Interaction | Code | 0
Semantic-enhanced Modality-asymmetric Retrieval for Online E-commerce Search | - | 0
GEMeX-ThinkVG: Towards Thinking with Visual Grounding in Medical VQA via Reinforcement Learning | - | 0
Scene-R1: Video-Grounded Large Language Models for 3D Scene Reasoning without 3D Annotations | - | 0
Can Common VLMs Rival Medical VLMs? Evaluation and Strategic Insights | - | 0
MEGC2025: Micro-Expression Grand Challenge on Spot Then Recognize and Visual Question Answering | - | 0
Adapting Lightweight Vision Language Models for Radiological Visual Question Answering | Code | 0
CAPO: Reinforcing Consistent Reasoning in Medical Decision-Making | - | 0
AntiGrounding: Lifting Robotic Actions into VLM Representation Space for Decision Making | - | 0
MTabVQA: Evaluating Multi-Tabular Reasoning of Language Models in Visual Space | - | 0
A Fast, Reliable, and Secure Programming Language for LLM Agents with Code Actions | - | 0
HalLoc: Token-level Localization of Hallucinations for Vision Language Models | Code | 0
SlotPi: Physics-informed Object-centric Reasoning Models | Code | 0
Provoking Multi-modal Few-Shot LVLM via Exploration-Exploitation In-Context Learning | - | 0
Outside Knowledge Conversational Video (OKCV) Dataset -- Dialoguing over Videos | Code | 0
Kvasir-VQA-x1: A Multimodal Dataset for Medical Reasoning and Robust MedVQA in Gastrointestinal Endoscopy | Code | 0
An Open-Source Software Toolkit & Benchmark Suite for the Evaluation and Adaptation of Multimodal Action Models | - | 0
Page 13 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | - | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | - | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | - | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | - | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | - | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | - | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | - | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | - | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | - | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | - | Unverified