SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 601–625 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| Benchmarking Vision Language Models for Cultural Understanding | — | 0 |
| DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception | Code | 2 |
| Segmentation-guided Attention for Visual Question Answering from Remote Sensing Images | — | 0 |
| Extracting Training Data from Document-Based VQA Models | — | 0 |
| VQA-Diff: Exploiting VQA and Diffusion for Zero-Shot Image-to-3D Vehicle Asset Generation in Autonomous Driving | — | 0 |
| Large Language Models Understand Layout | Code | 0 |
| WSI-VQA: Interpreting Whole Slide Images by Generative Visual Question Answering | Code | 2 |
| Second Place Solution of WSDM2023 Toloka Visual Question Answering Challenge | — | 0 |
| Rethinking Visual Prompting for Multimodal Large Language Models with External Knowledge | — | 0 |
| Black-box Model Ensembling for Textual and Visual Question Answering via Information Fusion | Code | 0 |
| MiniGPT-Med: Large Language Model as a General Interface for Radiology Diagnosis | Code | 2 |
| Visual Robustness Benchmark for Visual Question Answering (VQA) | Code | 0 |
| InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output | — | 0 |
| BACON: Improving Clarity of Image Captions via Bag-of-Concept Graphs | — | 0 |
| MindBench: A Comprehensive Benchmark for Mind Map Structure Recognition and Analysis | — | 0 |
| A Bounding Box is Worth One Token: Interleaving Layout and Text in a Large Language Model for Document Understanding | Code | 2 |
| Certainly Uncertain: A Benchmark and Metric for Multimodal Epistemic and Aleatoric Awareness | — | 0 |
| TokenPacker: Efficient Visual Projector for Multimodal LLM | Code | 3 |
| CVLUE: A New Benchmark Dataset for Chinese Vision-Language Understanding Evaluation | Code | 1 |
| Assistive Image Annotation Systems with Deep Learning and Natural Language Capabilities: A Review | — | 0 |
| STLLaVA-Med: Self-Training Large Language and Vision Assistant for Medical Question-Answering | Code | 1 |
| MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment | Code | 1 |
| Efficient Large Multi-modal Models via Visual Context Compression | Code | 2 |
| FlowVQA: Mapping Multimodal Logic in Visual Question Answering with Flowcharts | — | 0 |
| Disentangling Knowledge-based and Visual Reasoning by Question Decomposition in KB-VQA | — | 0 |
Page 25 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |