SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1101–1150 of 2177 papers

Title | Status | Hype
MANGO: Enhancing the Robustness of VQA Models via Adversarial Noise Generation | | 0
Explicit Knowledge-based Reasoning for Visual Question Answering | | 0
Video Question Answering via Attribute-Augmented Attention Network Learning | | 0
Explicit Bias Discovery in Visual Question Answering Models | | 0
Explanation vs Attention: A Two-Player Game to Obtain Attention for VQA | | 0
Explainable and Interpretable Multimodal Large Language Models: A Comprehensive Survey | | 0
Anatomy Might Be All You Need: Forecasting What to Do During Surgery | | 0
Mask4Align: Aligned Entity Prompting with Color Masks for Multi-Entity Localization Problems | | 0
MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering | | 0
Expanding the Boundaries of Vision Prior Knowledge in Multi-modal Large Language Models | | 0
MaVEn: An Effective Multi-granularity Hybrid Visual Encoding Framework for Multimodal Large Language Model | | 0
Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling | | 0
Expanding Frozen Vision-Language Models without Retraining: Towards Improved Robot Perception | | 0
Everything is a Video: Unifying Modalities through Next-Frame Prediction | | 0
EVE: Efficient Vision-Language Pre-training with Masked Prediction and Modality-Aware MoE | | 0
Evaluation and Enhancement of Semantic Grounding in Large Vision-Language Models | | 0
Measuring CLEVRness: Black-box Testing of Visual Reasoning Models | | 0
Measuring CLEVRness: Blackbox testing of Visual Reasoning Models | | 0
VILA^2: VILA Augmented VILA | | 0
Measuring Machine Intelligence Through Visual Question Answering | | 0
Med-2E3: A 2D-Enhanced 3D Medical Multimodal Large Language Model | | 0
Evaluating Zero-Shot GPT-4V Performance on 3D Visual Question Answering Benchmarks | | 0
Evaluating the Representational Hub of Language and Vision Models | | 0
Evaluating the Capabilities of Multi-modal Reasoning Models with Synthetic Task Data | | 0
Evaluating Attribute Confusion in Fashion Text-to-Image Generation | | 0
Estimating semantic structure for the VQA answer space | | 0
ERNIE-UniX2: A Unified Cross-lingual Cross-modal Framework for Understanding and Generation | | 0
An Analysis of Visual Question Answering Algorithms | | 0
Medical Visual Question Answering: A Survey | | 0
Medical visual question answering using joint self-supervised learning | | 0
ErgoChat: a Visual Query System for the Ergonomic Risk Assessment of Construction Workers | | 0
Entity-Focused Dense Passage Retrieval for Outside-Knowledge Visual Question Answering | | 0
Enhancing Visual Question Answering through Ranking-Based Hybrid Training and Multimodal Fusion | | 0
MedOrch: Medical Diagnosis with Tool-Augmented Reasoning Agents for Flexible Extensibility | | 0
Analysis on Image Set Visual Question Answering | | 0
Enhancing Scientific Visual Question Answering through Multimodal Reasoning and Ensemble Modeling | | 0
MedThink: Explaining Medical Visual Question Answering via Multimodal Decision-Making Rationale | | 0
MedVLM-R1: Incentivizing Medical Reasoning Capability of Vision-Language Models (VLMs) via Reinforcement Learning | | 0
MedXChat: A Unified Multimodal Large Language Model Framework towards CXRs Understanding and Generation | | 0
MEGC2025: Micro-Expression Grand Challenge on Spot Then Recognize and Visual Question Answering | | 0
Enhancing SAM with Efficient Prompting and Preference Optimization for Semi-supervised Medical Image Segmentation | | 0
Memory-Augmented Multimodal LLMs for Surgical VQA via Self-Contained Inquiry | | 0
Memory Augmented Neural Networks for Natural Language Processing | | 0
Merlin: Empowering Multimodal LLMs with Foresight Minds | | 0
Meta-Adaptive Prompt Distillation for Few-Shot Visual Question Answering | | 0
MetaToken: Detecting Hallucination in Image Descriptions by Meta Classification | | 0
From Training-Free to Adaptive: Empirical Insights into MLLMs' Understanding of Detection Information | | 0
MF2-MVQA: A Multi-stage Feature Fusion method for Medical Visual Question Answering | | 0
Enhancing Human-Computer Interaction in Chest X-ray Analysis using Vision and Language Model with Eye Gaze Patterns | | 0
MGA-VQA: Multi-Granularity Alignment for Visual Question Answering | | 0
Page 23 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified