SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1101–1125 of 2177 papers

Title | Status | Hype
Explaining Autonomous Driving Actions with Visual Question Answering | Code | 1
A reinforcement learning approach for VQA validation: an application to diabetic macular edema grading | – | 0
Generative Visual Question Answering | – | 0
Towards a performance analysis on pre-trained Visual Question Answering models for autonomous driving | Code | 0
Let's ViCE! Mimicking Human Cognitive Behavior in Image Generation Evaluation | – | 0
PAT: Parallel Attention Transformer for Visual Question Answering in Vietnamese | – | 0
A scoping review on multimodal deep learning in biomedical images and texts | – | 0
MMBench: Is Your Multi-modal Model an All-around Player? | Code | 5
Rad-ReStruct: A Novel VQA Benchmark and Method for Structured Radiology Reporting | Code | 1
CAT-ViL: Co-Attention Gated Vision-Language Embedding for Visual Question Localized-Answering in Robotic Surgery | Code | 1
Emu: Generative Pretraining in Multimodality | Code | 3
Self-Adaptive Sampling for Efficient Video Question-Answering on Image--Text Models | Code | 1
GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest | Code | 2
Structure Guided Multi-modal Pre-trained Transformer for Knowledge Graph Reasoning | – | 0
UIT-Saviors at MEDVQA-GI 2023: Improving Multimodal Learning with Image Enhancement for Gastrointestinal Visual Question Answering | – | 0
JourneyDB: A Benchmark for Generative Image Understanding | Code | 2
Localized Questions in Medical Visual Question Answering | Code | 1
Multimodal Prompt Retrieval for Generative Visual Question Answering | Code | 1
Answer Mining from a Pool of Images: Towards Retrieval-Based Visual Question Answering | Code | 1
Pre-Training Multi-Modal Dense Retrievers for Outside-Knowledge Visual Question Answering | Code | 0
Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | Code | 2
Kosmos-2: Grounding Multimodal Large Language Models to the World | Code | 1
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Code | 2
Visual Question Answering in Remote Sensing with Cross-Attention and Multimodal Information Bottleneck | – | 0
Switch-BERT: Learning to Model Multimodal Interactions by Switching Attention and Input | – | 0
Page 45 of 88

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | – | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | – | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | – | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | – | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | – | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | – | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | – | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | – | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | – | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | – | Unverified