SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 326–350 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| Less is More: A Simple yet Effective Token Reduction Method for Efficient Multi-modal LLMs | Code | 1 |
| Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning | Code | 1 |
| LIVE: Learnable In-Context Vector for Visual Question Answering | Code | 1 |
| Coarse-to-Fine Reasoning for Visual Question Answering | Code | 1 |
| Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone | Code | 1 |
| A Survey of Medical Vision-and-Language Applications and Their Techniques | Code | 1 |
| Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | Code | 1 |
| CAT-ViL: Co-Attention Gated Vision-Language Embedding for Visual Question Localized-Answering in Robotic Surgery | Code | 1 |
| COBRA: Contrastive Bi-Modal Representation Algorithm | Code | 1 |
| A Stitch in Time Saves Nine: Small VLM is a Precise Guidance for Accelerating Large VLMs | Code | 1 |
| Dual-Key Multimodal Backdoors for Visual Question Answering | Code | 1 |
| Learning Situation Hyper-Graphs for Video Question Answering | Code | 1 |
| Large Scale Multimodal Classification Using an Ensemble of Transformer Models and Co-Attention | Code | 1 |
| A Stitch in Time Saves Nine: A Train-Time Regularizing Loss for Improved Neural Network Calibration | Code | 1 |
| Cognitive Visual-Language Mapper: Advancing Multimodal Comprehension with Enhanced Visual Knowledge Alignment | Code | 1 |
| Large-scale Pretraining for Visual Dialog: A Simple State-of-the-Art Baseline | Code | 1 |
| CLIP-Guided Vision-Language Pre-training for Question Answering in 3D Scenes | Code | 1 |
| GPT-4V-AD: Exploring Grounding Potential of VQA-oriented GPT-4V for Zero-shot Anomaly Detection | Code | 1 |
| LaTr: Layout-Aware Transformer for Scene-Text VQA | Code | 1 |
| Learning to Answer Questions in Dynamic Audio-Visual Scenarios | Code | 1 |
| LIME: Less Is More for MLLM Evaluation | Code | 1 |
| MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale | Code | 1 |
| CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations | Code | 1 |
| Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases | Code | 1 |
| Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA | Code | 1 |
Page 14 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |