SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 351–375 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| I2I: Initializing Adapters with Improvised Knowledge | Code | 1 |
| How to Configure Good In-Context Sequence for Visual Question Answering | Code | 1 |
| Attention in Reasoning: Dataset, Analysis, and Modeling | Code | 1 |
| A Stitch in Time Saves Nine: Small VLM is a Precise Guidance for Accelerating Large VLMs | Code | 1 |
| Label-Descriptive Patterns and Their Application to Characterizing Classification Errors | Code | 1 |
| LaKo: Knowledge-driven Visual Question Answering via Late Knowledge-to-Text Injection | Code | 1 |
| Language-Informed Visual Concept Learning | Code | 1 |
| Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA | Code | 1 |
| ConceptBert: Concept-Aware Representation for Visual Question Answering | Code | 1 |
| Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts | Code | 1 |
| BiomedCLIP: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs | Code | 1 |
| Large Scale Multimodal Classification Using an Ensemble of Transformer Models and Co-Attention | Code | 1 |
| CVLUE: A New Benchmark Dataset for Chinese Vision-Language Understanding Evaluation | Code | 1 |
| Consistency-preserving Visual Question Answering in Medical Imaging | Code | 1 |
| Hypergraph Transformer: Weakly-supervised Multi-hop Reasoning for Knowledge-based Visual Question Answering | Code | 1 |
| ConTEXTual Net: A Multimodal Vision-Language Model for Segmentation of Pneumothorax | Code | 1 |
| I Can't Believe There's No Images! Learning Visual Tasks Using only Language Supervision | Code | 1 |
| Cross-modal Retrieval for Knowledge-based Visual Question Answering | Code | 1 |
| Contrast and Classify: Training Robust VQA Models | Code | 1 |
| A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA | Code | 1 |
| A Stitch in Time Saves Nine: A Train-Time Regularizing Loss for Improved Neural Network Calibration | Code | 1 |
| CLIP-Guided Vision-Language Pre-training for Question Answering in 3D Scenes | Code | 1 |
| Cross-modal Information Flow in Multimodal Large Language Models | Code | 1 |
| How Much Can CLIP Benefit Vision-and-Language Tasks? | Code | 1 |
| Lever LM: Configuring In-Context Sequence to Lever Large Vision Language Models | Code | 1 |
Page 15 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |