SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 351–400 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| A Stitch in Time Saves Nine: Small VLM is a Precise Guidance for Accelerating Large VLMs | Code | 1 |
| Explainable and Interpretable Multimodal Large Language Models: A Comprehensive Survey |  | 0 |
| Copy-Move Forgery Detection and Question Answering for Remote Sensing Image | Code | 0 |
| CEGI: Measuring the trade-off between efficiency and carbon emissions for SLMs and VLMs |  | 0 |
| Understanding the World's Museums through Vision-Language Reasoning | Code | 0 |
| Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Context Sparsification | Code | 2 |
| DLaVA: Document Language and Vision Assistant for Answer Localization with Enhanced Interpretability and Trustworthiness | Code | 0 |
| SURE-VQA: Systematic Understanding of Robustness Evaluation in Medical VQA Tasks | Code | 0 |
| Sparse Attention Vectors: Generative Multimodal Model Features Are Discriminative Vision-Language Classifiers |  | 0 |
| Beyond Logit Lens: Contextual Embeddings for Robust Hallucination Detection & Grounding in VLMs |  | 0 |
| ElectroVizQA: How well do Multi-modal LLMs perform in Electronics Visual Question Answering? |  | 0 |
| Active Data Curation Effectively Distills Large-Scale Multimodal Models |  | 0 |
| Cross-modal Information Flow in Multimodal Large Language Models | Code | 1 |
| Efficient Multi-modal Large Language Models via Visual Token Grouping |  | 0 |
| Task Progressive Curriculum Learning for Robust Visual Question Answering |  | 0 |
| Path-RAG: Knowledge-Guided Key Region Retrieval for Open-ended Pathology Visual Question Answering | Code | 2 |
| Grounding-IQA: Multimodal Language Grounding Model for Image Quality Assessment | Code | 2 |
| Natural Language Understanding and Inference with MLLM in Visual Question Answering: A Survey |  | 0 |
| GEMeX: A Large-Scale, Groundable, and Explainable Medical VQA Benchmark for Chest X-ray Diagnosis |  | 0 |
| Augmenting Multimodal LLMs with Self-Reflective Tokens for Knowledge-based Visual Question Answering | Code | 2 |
| ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration | Code | 2 |
| Text-Guided Coarse-to-Fine Fusion Network for Robust Remote Sensing Visual Question Answering |  | 0 |
| FINECAPTION: Compositional Image Captioning Focusing on Wherever You Want at Any Granularity |  | 0 |
| freePruner: A Training-free Approach for Large Multimodal Model Acceleration |  | 0 |
| ReWind: Understanding Long Videos with Instructed Learnable Memory |  | 0 |
| GMAI-VL & GMAI-VL-5.5M: A Large Vision-Language Model and A Comprehensive Multimodal Dataset Towards General Medical AI | Code | 2 |
| FocusLLaVA: A Coarse-to-Fine Approach for Efficient and Effective Visual Token Compression |  | 0 |
| Visual Contexts Clarify Ambiguous Expressions: A Benchmark Dataset | Code | 0 |
| Looking Beyond Text: Reducing Language bias in Large Vision-Language Models via Multimodal Dual-Attention and Soft-Image Guidance |  | 0 |
| Uni-Mlip: Unified Self-supervision for Medical Vision Language Pre-training |  | 0 |
| Teaching VLMs to Localize Specific Objects from In-context Examples | Code | 1 |
| LaVida Drive: Vision-Text Interaction VLM for Autonomous Driving with Token Selection, Recovery and Enhancement |  | 0 |
| Med-2E3: A 2D-Enhanced 3D Medical Multimodal Large Language Model |  | 0 |
| A Survey of Medical Vision-and-Language Applications and Their Techniques | Code | 1 |
| CATCH: Complementary Adaptive Token-level Contrastive Decoding to Mitigate Hallucinations in LVLMs |  | 0 |
| Value-Spectrum: Quantifying Preferences of Vision-Language Models via Value Decomposition in Social Media Contexts | Code | 0 |
| MC-LLaVA: Multi-Concept Personalized Vision-Language Model | Code | 2 |
| Memory-Augmented Multimodal LLMs for Surgical VQA via Self-Contained Inquiry |  | 0 |
| Learn from Downstream and Be Yourself in Multimodal Large Language Model Fine-Tuning | Code | 0 |
| BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation | Code | 1 |
| Understanding Multimodal LLMs: the Mechanistic Interpretability of Llava in Visual Question Answering | Code | 0 |
| A Comprehensive Survey on Visual Question Answering Datasets and Algorithms |  | 0 |
| Large Vision-Language Models for Remote Sensing Visual Question Answering |  | 0 |
| Everything is a Video: Unifying Modalities through Next-Frame Prediction |  | 0 |
| AMXFP4: Taming Activation Outliers with Asymmetric Microscaling Floating-Point for 4-bit LLM Inference |  | 0 |
| LLaVA-CoT: Let Vision Language Models Reason Step-by-Step | Code | 7 |
| Visual question answering based evaluation metrics for text-to-image generation |  | 0 |
| JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation | Code | 11 |
| SparrowVQE: Visual Question Explanation for Course Content Understanding | Code | 0 |
| Integrating Object Detection Modality into Visual Language Model for Enhanced Autonomous Driving Agent |  | 0 |
Page 8 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 |  | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 |  | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 |  | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 |  | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 |  | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 |  | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 |  | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 |  | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 |  | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 |  | Unverified |