SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 51–100 of 2177 papers

Title | Status | Hype
Flamingo: a Visual Language Model for Few-Shot Learning | Code | 4
SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal Large Language Models | Code | 4
MIMIC-IT: Multi-Modal In-Context Instruction Tuning | Code | 4
BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models | Code | 4
Scaling Up Biomedical Vision-Language Models: Fine-Tuning, Instruction Tuning, and Multi-Modal Learning | Code | 4
Knowledge Graphs Meet Multi-Modal Learning: A Comprehensive Survey | Code | 3
InfoChartQA: A Benchmark for Multimodal Question Answering on Infographic Charts | Code | 3
Vision-Language Models for Medical Report Generation and Visual Question Answering: A Review | Code | 3
Vision-Language Pre-training: Basics, Recent Advances, and Future Trends | Code | 3
VisionZip: Longer is Better but Not Necessary in Vision Language Models | Code | 3
VARGPT: Unified Understanding and Generation in a Visual Autoregressive Multimodal Large Language Model | Code | 3
Champion Solution for the WSDM2023 Toloka VQA Challenge | Code | 3
LLaVA-Phi: Efficient Multi-Modal Assistant with Small Language Model | Code | 3
Vary: Scaling up the Vision Vocabulary for Large Vision-Language Models | Code | 3
View Selection for 3D Captioning via Diffusion Ranking | Code | 3
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context | Code | 3
TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones | Code | 3
SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation | Code | 3
Generative Multimodal Models are In-Context Learners | Code | 3
SimLingo: Vision-Only Closed-Loop Autonomous Driving with Language-Action Alignment | Code | 3
All You May Need for VQA are Image Captions | Code | 3
Feast Your Eyes: Mixture-of-Resolution Adaptation for Multimodal Large Language Models | Code | 3
ALLaVA: Harnessing GPT4V-Synthesized Data for Lite Vision-Language Models | Code | 3
Bilinear Attention Networks | Code | 3
Baichuan-Omni Technical Report | Code | 3
Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent | Code | 3
Emu: Generative Pretraining in Multimodality | Code | 3
TokenPacker: Efficient Visual Projector for Multimodal LLM | Code | 3
Efficient Multimodal Large Language Models: A Survey | Code | 3
Emu3: Next-Token Prediction is All You Need | Code | 3
PreFLMR: Scaling Up Fine-Grained Late-Interaction Multi-modal Retrievers | Code | 3
Evaluating Text-to-Visual Generation with Image-to-Text Generation | Code | 3
DriveLM: Driving with Graph Visual Question Answering | Code | 3
MoAI: Mixture of All Intelligence for Large Language and Vision Models | Code | 3
MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs | Code | 3
MM-Vet v2: A Challenging Benchmark to Evaluate Large Multimodal Models for Integrated Capabilities | Code | 3
Lyra: An Efficient and Speech-Centric Framework for Omni-Cognition | Code | 3
Mipha: A Comprehensive Overhaul of Multimodal Assistant with Small Language Models | Code | 3
M3D: Advancing 3D Medical Image Analysis with Multi-Modal Large Language Models | Code | 3
SkySense: A Multi-Modal Remote Sensing Foundation Model Towards Universal Interpretation for Earth Observation Imagery | Code | 3
Q-Bench+: A Benchmark for Multi-modal Foundation Models on Low-level Vision from Single Images to Pairs | Code | 3
Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models | Code | 3
LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models | Code | 2
List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs | Code | 2
LinVT: Empower Your Image-level Large Language Model to Understand Videos | Code | 2
Large Continual Instruction Assistant | Code | 2
Keeping Yourself is Important in Downstream Tuning Multimodal Large Language Model | Code | 2
JourneyDB: A Benchmark for Generative Image Understanding | Code | 2
LingoQA: Visual Question Answering for Autonomous Driving | Code | 2
Imp: Highly Capable Large Multimodal Models for Mobile Devices | Code | 2
Page 2 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified