SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1251–1300 of 2177 papers

Title | Status | Hype
Looking Beyond Text: Reducing Language bias in Large Vision-Language Models via Multimodal Dual-Attention and Soft-Image Guidance | | 0
Look, Learn and Leverage (L^3): Mitigating Visual-Domain Shift and Discovering Intrinsic Relations via Symbolic Alignment | | 0
Look, Read and Ask: Learning to Ask Questions by Reading Text in Images | | 0
LRRA: A Transparent Neural-Symbolic Reasoning Framework for Real-World Visual Question Answering | | 0
Can You Explain That? Lucid Explanations Help Human-AI Collaborative Image Retrieval | | 0
LVLM_CSP: Accelerating Large Vision Language Models via Clustering, Scattering, and Pruning for Reasoning Segmentation | | 0
M3DocRAG: Multi-modal Retrieval is What You Need for Multi-page Multi-document Understanding | | 0
M4CXR: Exploring Multi-task Potentials of Multi-modal Large Language Models for Chest X-ray Interpretation | | 0
MagiC: Evaluating Multimodal Cognition Toward Grounded Visual Reasoning | | 0
MAGIC-VQA: Multimodal And Grounded Inference with Commonsense Knowledge for Visual Question Answering | | 0
Making the Most of What You Have: Adapting Pre-trained Visual Language Models in the Low-data Regime | | 0
MAMO: Masked Multimodal Modeling for Fine-Grained Vision-Language Representation Learning | | 0
MANGO: Enhancing the Robustness of VQA Models via Adversarial Noise Generation | | 0
Mask4Align: Aligned Entity Prompting with Color Masks for Multi-Entity Localization Problems | | 0
MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering | | 0
MaVEn: An Effective Multi-granularity Hybrid Visual Encoding Framework for Multimodal Large Language Model | | 0
Measuring CLEVRness: Black-box Testing of Visual Reasoning Models | | 0
Measuring CLEVRness: Blackbox testing of Visual Reasoning Models | | 0
Measuring Machine Intelligence Through Visual Question Answering | | 0
Med-2E3: A 2D-Enhanced 3D Medical Multimodal Large Language Model | | 0
Medical Visual Question Answering: A Survey | | 0
Medical visual question answering using joint self-supervised learning | | 0
MedOrch: Medical Diagnosis with Tool-Augmented Reasoning Agents for Flexible Extensibility | | 0
MedThink: Explaining Medical Visual Question Answering via Multimodal Decision-Making Rationale | | 0
MedVLM-R1: Incentivizing Medical Reasoning Capability of Vision-Language Models (VLMs) via Reinforcement Learning | | 0
MedXChat: A Unified Multimodal Large Language Model Framework towards CXRs Understanding and Generation | | 0
MEGC2025: Micro-Expression Grand Challenge on Spot Then Recognize and Visual Question Answering | | 0
Memory-Augmented Multimodal LLMs for Surgical VQA via Self-Contained Inquiry | | 0
Memory Augmented Neural Networks for Natural Language Processing | | 0
Merlin: Empowering Multimodal LLMs with Foresight Minds | | 0
Meta-Adaptive Prompt Distillation for Few-Shot Visual Question Answering | | 0
MetaToken: Detecting Hallucination in Image Descriptions by Meta Classification | | 0
MF2-MVQA: A Multi-stage Feature Fusion method for Medical Visual Question Answering | | 0
MGA-VQA: Multi-Granularity Alignment for Visual Question Answering | | 0
MIMOQA: Multimodal Input Multimodal Output Question Answering | | 0
MindBench: A Comprehensive Benchmark for Mind Map Structure Recognition and Analysis | | 0
Mindstorms in Natural Language-Based Societies of Mind | | 0
Mitigating Hallucination in Visual-Language Models via Re-Balancing Contrastive Decoding | | 0
Mitigating Low-Level Visual Hallucinations Requires Self-Awareness: Database, Model and Training Strategy | | 0
Data-augmented phrase-level alignment for mitigating object hallucination | | 0
Mitigating the Impact of Attribute Editing on Face Recognition | | 0
MIVC: Multiple Instance Visual Component for Visual-Language Models | | 0
Mixture of Rationale: Multi-Modal Reasoning Mixture for Visual Question Answering | | 0
MM1.5: Methods, Analysis & Insights from Multimodal LLM Fine-tuning | | 0
MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training | | 0
MMAR: Towards Lossless Multi-Modal Auto-Regressive Probabilistic Modeling | | 0
MMCOMPOSITION: Revisiting the Compositionality of Pre-trained Vision-Language Models | | 0
MMCTAgent: Multi-modal Critical Thinking Agent Framework for Complex Visual Reasoning | | 0
MMED: A Multi-domain and Multi-modality Event Dataset | | 0
MME-Finance: A Multimodal Finance Benchmark for Expert-level Understanding and Reasoning | | 0
Page 26 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified