SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 376–400 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| GMAI-VL & GMAI-VL-5.5M: A Large Vision-Language Model and A Comprehensive Multimodal Dataset Towards General Medical AI | Code | 2 |
| FocusLLaVA: A Coarse-to-Fine Approach for Efficient and Effective Visual Token Compression | | 0 |
| Visual Contexts Clarify Ambiguous Expressions: A Benchmark Dataset | Code | 0 |
| Looking Beyond Text: Reducing Language Bias in Large Vision-Language Models via Multimodal Dual-Attention and Soft-Image Guidance | | 0 |
| Uni-Mlip: Unified Self-supervision for Medical Vision Language Pre-training | | 0 |
| Teaching VLMs to Localize Specific Objects from In-context Examples | Code | 1 |
| LaVida Drive: Vision-Text Interaction VLM for Autonomous Driving with Token Selection, Recovery and Enhancement | | 0 |
| Med-2E3: A 2D-Enhanced 3D Medical Multimodal Large Language Model | | 0 |
| A Survey of Medical Vision-and-Language Applications and Their Techniques | Code | 1 |
| CATCH: Complementary Adaptive Token-level Contrastive Decoding to Mitigate Hallucinations in LVLMs | | 0 |
| Value-Spectrum: Quantifying Preferences of Vision-Language Models via Value Decomposition in Social Media Contexts | Code | 0 |
| MC-LLaVA: Multi-Concept Personalized Vision-Language Model | Code | 2 |
| Memory-Augmented Multimodal LLMs for Surgical VQA via Self-Contained Inquiry | | 0 |
| Learn from Downstream and Be Yourself in Multimodal Large Language Model Fine-Tuning | Code | 0 |
| BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation | Code | 1 |
| Understanding Multimodal LLMs: the Mechanistic Interpretability of Llava in Visual Question Answering | Code | 0 |
| A Comprehensive Survey on Visual Question Answering Datasets and Algorithms | | 0 |
| Large Vision-Language Models for Remote Sensing Visual Question Answering | | 0 |
| Everything is a Video: Unifying Modalities through Next-Frame Prediction | | 0 |
| AMXFP4: Taming Activation Outliers with Asymmetric Microscaling Floating-Point for 4-bit LLM Inference | | 0 |
| LLaVA-CoT: Let Vision Language Models Reason Step-by-Step | Code | 7 |
| Visual question answering based evaluation metrics for text-to-image generation | | 0 |
| JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation | Code | 11 |
| SparrowVQE: Visual Question Explanation for Course Content Understanding | Code | 0 |
| Integrating Object Detection Modality into Visual Language Model for Enhanced Autonomous Driving Agent | | 0 |
Page 16 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |