SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 401–450 of 2177 papers

Title | Status | Hype
A Stitch in Time Saves Nine: A Train-Time Regularizing Loss for Improved Neural Network Calibration | Code | 1
IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages | Code | 1
Benchmarking Multimodal Mathematical Reasoning with Explicit Visual Dependency | Code | 1
Hierarchical multimodal transformers for Multi-Page DocVQA | Code | 1
Gated Hierarchical Attention for Image Captioning | Code | 1
CLIP-Guided Vision-Language Pre-training for Question Answering in 3D Scenes | Code | 1
Hypergraph Transformer: Weakly-supervised Multi-hop Reasoning for Knowledge-based Visual Question Answering | Code | 1
How to Configure Good In-Context Sequence for Visual Question Answering | Code | 1
Multi-Scale Attention for Audio Question Answering | Code | 1
Multi-Step Visual Reasoning with Visual Tokens Scaling and Verification | Code | 1
CVLUE: A New Benchmark Dataset for Chinese Vision-Language Understanding Evaluation | Code | 1
An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA | Code | 1
I2I: Initializing Adapters with Improvised Knowledge | Code | 1
I Can't Believe There's No Images! Learning Visual Tasks Using only Language Supervision | Code | 1
Graph Optimal Transport for Cross-Domain Alignment | Code | 1
MediConfusion: Can you trust your AI radiologist? Probing the reliability of multimodal medical foundation models | Code | 1
CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations | Code | 1
FaceBench: A Multi-View Multi-Level Facial Attribute VQA Dataset for Benchmarking Face Perception MLLMs | Code | 1
CLEVR-Math: A Dataset for Compositional Language, Visual and Mathematical Reasoning | Code | 1
Expressive Scene Graph Generation Using Commonsense Knowledge Infusion for Visual Understanding and Reasoning | Code | 1
BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models | Code | 1
MDETR -- Modulated Detection for End-to-End Multi-Modal Understanding | Code | 1
ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning | Code | 1
MC-CoT: A Modular Collaborative CoT Framework for Zero-shot Medical-VQA with LLM and MLLM Integration | Code | 1
CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning | Code | 1
Comprehensive Visual Question Answering on Point Clouds through Compositional Scene Manipulation | Code | 1
MapQA: A Dataset for Question Answering on Choropleth Maps | Code | 1
Explaining Autonomous Driving Actions with Visual Question Answering | Code | 1
Many Heads but One Brain: Fusion Brain -- a Competition and a Single Multimodal Multitask Architecture | Code | 1
Expert Knowledge-Aware Image Difference Graph Representation Learning for Difference-Aware Medical Visual Question Answering | Code | 1
GPT-4V-AD: Exploring Grounding Potential of VQA-oriented GPT-4V for Zero-shot Anomaly Detection | Code | 1
INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model | Code | 1
MAPL: Parameter-Efficient Adaptation of Unimodal Pre-Trained Models for Vision-Language Few-Shot Prompting | Code | 1
Interpreting Chest X-rays Like a Radiologist: A Benchmark with Clinical Reasoning | Code | 1
Evaluating Image Hallucination in Text-to-Image Generation with Question-Answering | Code | 1
Evaluating Multimodal Representations on Visual Semantic Textual Similarity | Code | 1
Inst-IT: Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tuning | Code | 1
Pano-AVQA: Grounded Audio-Visual Question Answering on 360° Videos | Code | 1
MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale | Code | 1
ChiQA: A Large Scale Image-based Real-World Question Answering Dataset for Multi-Modal Understanding | Code | 1
Faithful Multimodal Explanation for Visual Question Answering | Code | 1
MangaVQA and MangaLMM: A Benchmark and Specialized Model for Multimodal Manga Understanding | Code | 1
Defeasible Visual Entailment: Benchmark, Evaluator, and Reward-Driven Optimization | Code | 1
Beyond Embeddings: The Promise of Visual Table in Visual Reasoning | Code | 1
MAP: Multimodal Uncertainty-Aware Vision-Language Pre-training Model | Code | 1
ChestX-Reasoner: Advancing Radiology Foundation Models with Reasoning through Step-by-Step Verification | Code | 1
Describe Anything Model for Visual Question Answering on Text-rich Images | Code | 1
Are Bias Mitigation Techniques for Deep Learning Effective? | Code | 1
Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering | Code | 1
Check It Again: Progressive Visual Question Answering via Visual Entailment | Code | 1
Page 9 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | - | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | - | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | - | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | - | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | - | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | - | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | - | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | - | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | - | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | - | Unverified