SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 401–425 of 2177 papers

| Title | Status | Hype |
|-------|--------|------|
| Comprehensive Visual Question Answering on Point Clouds through Compositional Scene Manipulation | Code | 1 |
| Greedy Gradient Ensemble for Robust Visual Question Answering | Code | 1 |
| Enhancing Visual Question Answering through Question-Driven Image Captions as Prompts | Code | 1 |
| Location-Free Scene Graph Generation | Code | 1 |
| Enhancing Vision-Language Pre-Training with Jointly Learned Questioner and Dense Captioner | Code | 1 |
| Evaluating Image Hallucination in Text-to-Image Generation with Question-Answering | Code | 1 |
| Hallucination Augmented Contrastive Learning for Multimodal Large Language Model | Code | 1 |
| Hierarchical multimodal transformers for Multi-Page DocVQA | Code | 1 |
| EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images | Code | 1 |
| MultiChartQA: Benchmarking Vision-Language Models on Multi-Chart Problems | Code | 1 |
| CVLUE: A New Benchmark Dataset for Chinese Vision-Language Understanding Evaluation | Code | 1 |
| An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA | Code | 1 |
| Multi-modal Auto-regressive Modeling via Visual Words | Code | 1 |
| Multimodal ChatGPT for Medical Applications: an Experimental Study of GPT-4V | Code | 1 |
| ChiQA: A Large Scale Image-based Real-World Question Answering Dataset for Multi-Modal Understanding | Code | 1 |
| LXMERT: Learning Cross-Modality Encoder Representations from Transformers | Code | 1 |
| Bayesian Attention Modules | Code | 1 |
| How Do Multimodal Large Language Models Handle Complex Multimodal Reasoning? Placing Them in An Extensible Escape Game | Code | 1 |
| EarthVQA: Towards Queryable Earth via Relational Reasoning-Based Remote Sensing Visual Question Answering | Code | 1 |
| ChestX-Reasoner: Advancing Radiology Foundation Models with Reasoning through Step-by-Step Verification | Code | 1 |
| BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models | Code | 1 |
| Hypergraph Transformer: Weakly-supervised Multi-hop Reasoning for Knowledge-based Visual Question Answering | Code | 1 |
| ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning | Code | 1 |
| I2I: Initializing Adapters with Improvised Knowledge | Code | 1 |
| Check It Again: Progressive Visual Question Answering via Visual Entailment | Code | 1 |
Page 17 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |