SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 801–825 of 2177 papers

Title | Status | Hype
A dataset of clinically generated visual questions and answers about radiology images | – | 0
HRVQA: A Visual Question Answering Benchmark for High-Resolution Aerial Images | – | 0
Generating and Evaluating Explanations of Attended and Error-Inducing Input Regions for VQA Models | – | 0
Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions? | – | 0
Human-centered Interactive Learning via MLLMs for Text-to-Image Person Re-identification | – | 0
CROME: Cross-Modal Adapters for Efficient Multimodal LLM | – | 0
Hummingbird: High Fidelity Image Generation via Multimodal Context Alignment | – | 0
Hyperbolic Attention Networks | – | 0
Hyper-dimensional computing for a visual question-answering system that is trainable end-to-end | – | 0
JTD-UAV: MLLM-Enhanced Joint Tracking and Description Framework for Anti-UAV Systems | – | 0
Hallucination at a Glance: Controlled Visual Edits and Fine-Grained Multimodal Learning | – | 0
CREPE: Coordinate-Aware End-to-End Document Parser | – | 0
Hadamard product in deep learning: Introduction, Advances and Challenges | – | 0
AVIS: Autonomous Visual Information Seeking with Large Language Model Agent | – | 0
CQ-VQA: Visual Question Answering on Categorized Questions | – | 0
'Just because you are right, doesn't mean I am wrong': Overcoming a bottleneck in development and evaluation of Open-Ended VQA tasks | – | 0
i-Code Studio: A Configurable and Composable Framework for Integrative AI | – | 0
Knowing Where to Look? Analysis on Attention of Visual Question Answering System | – | 0
Language Features Matter: Effective Language Representations for Vision-Language Tasks | – | 0
Barriers in Integrating Medical Visual Question Answering into Radiology Workflows: A Scoping Review and Clinicians' Insights | – | 0
A Causal Approach to Mitigate Modality Preference Bias in Medical Visual Question Answering | – | 0
Large Vision-Language Models for Remote Sensing Visual Question Answering | – | 0
Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision | – | 0
H2OVL-Mississippi Vision Language Models Technical Report | – | 0
Page 33 of 88

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | – | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | – | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | – | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | – | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | – | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | – | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | – | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | – | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | – | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | – | Unverified