SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1026–1050 of 2177 papers

Title | Status | Hype
RSAdapter: Adapting Multimodal Models for Remote Sensing Visual Question Answering | Code | 0
Frozen Transformers in Language Models Are Effective Visual Encoder Layers | Code | 2
UNK-VQA: A Dataset and a Probe into the Abstention Ability of Multi-modal Large Models | Code | 0
MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning | Code | 7
Enhancing BERT-Based Visual Question Answering through Keyword-Driven Sentence Selection | | 0
From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models | Code | 2
Exploring Sparse Spatial Relation in Graph Inference for Text-Based VQA | | 0
Open-Set Knowledge-Based Visual Question Answering with Inference Paths | Code | 0
Ziya-Visual: Bilingual Large Vision-Language Model via Multi-Task Instruction Tuning | | 0
Jaeger: A Concatenation-Based Multi-Transformer VQA Model | | 0
Improving mitosis detection on histopathology images using large vision-language models | | 0
Uncovering Hidden Connections: Iterative Search and Reasoning for Video-grounded Dialog | Code | 0
Solution for SMART-101 Challenge of ICCV Multi-modal Algorithmic Reasoning Task 2023 | | 0
Negative Object Presence Evaluation (NOPE) to Measure Object Hallucination in Vision-Language Models | | 0
Rephrase, Augment, Reason: Visual Grounding of Questions for Vision-Language Models | Code | 1
Causal Reasoning through Two Layers of Cognition for Improving Generalization in Visual Question Answering | | 0
Lightweight In-Context Tuning for Multimodal Unified Models | | 0
Improved Baselines with Visual Instruction Tuning | Code | 6
On the Cognition of Visual Question Answering Models and Human Intelligence: A Comparative Study | | 0
Improving Automatic VQA Evaluation Using Large Language Models | | 0
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Code | 2
SelfGraphVQA: A Self-Supervised Graph Neural Network for Scene-based Question Answering | | 0
Human Mobility Question Answering (Vision Paper) | | 0
Fine-grained Late-interaction Multi-modal Retrieval for Retrieval Augmented Visual Question Answering | Code | 2
Toloka Visual Question Answering Benchmark | Code | 1
Page 42 of 88

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified