SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 901–950 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| Active Data Curation Effectively Distills Large-Scale Multimodal Models | | 0 |
| ElectroVizQA: How well do Multi-modal LLMs perform in Electronics Visual Question Answering? | | 0 |
| Natural Language Understanding and Inference with MLLM in Visual Question Answering: A Survey | | 0 |
| Task Progressive Curriculum Learning for Robust Visual Question Answering | | 0 |
| Efficient Multi-modal Large Language Models via Visual Token Grouping | | 0 |
| GEMeX: A Large-Scale, Groundable, and Explainable Medical VQA Benchmark for Chest X-ray Diagnosis | | 0 |
| Text-Guided Coarse-to-Fine Fusion Network for Robust Remote Sensing Visual Question Answering | | 0 |
| freePruner: A Training-free Approach for Large Multimodal Model Acceleration | | 0 |
| FINECAPTION: Compositional Image Captioning Focusing on Wherever You Want at Any Granularity | | 0 |
| ReWind: Understanding Long Videos with Instructed Learnable Memory | | 0 |
| Looking Beyond Text: Reducing Language bias in Large Vision-Language Models via Multimodal Dual-Attention and Soft-Image Guidance | | 0 |
| FocusLLaVA: A Coarse-to-Fine Approach for Efficient and Effective Visual Token Compression | | 0 |
| Visual Contexts Clarify Ambiguous Expressions: A Benchmark Dataset | Code | 0 |
| Uni-Mlip: Unified Self-supervision for Medical Vision Language Pre-training | | 0 |
| LaVida Drive: Vision-Text Interaction VLM for Autonomous Driving with Token Selection, Recovery and Enhancement | | 0 |
| Med-2E3: A 2D-Enhanced 3D Medical Multimodal Large Language Model | | 0 |
| CATCH: Complementary Adaptive Token-level Contrastive Decoding to Mitigate Hallucinations in LVLMs | | 0 |
| Value-Spectrum: Quantifying Preferences of Vision-Language Models via Value Decomposition in Social Media Contexts | Code | 0 |
| Understanding Multimodal LLMs: the Mechanistic Interpretability of Llava in Visual Question Answering | Code | 0 |
| Memory-Augmented Multimodal LLMs for Surgical VQA via Self-Contained Inquiry | | 0 |
| Learn from Downstream and Be Yourself in Multimodal Large Language Model Fine-Tuning | Code | 0 |
| A Comprehensive Survey on Visual Question Answering Datasets and Algorithms | | 0 |
| Large Vision-Language Models for Remote Sensing Visual Question Answering | | 0 |
| AMXFP4: Taming Activation Outliers with Asymmetric Microscaling Floating-Point for 4-bit LLM Inference | | 0 |
| Visual question answering based evaluation metrics for text-to-image generation | | 0 |
| Everything is a Video: Unifying Modalities through Next-Frame Prediction | | 0 |
| SparrowVQE: Visual Question Explanation for Course Content Understanding | Code | 0 |
| Integrating Object Detection Modality into Visual Language Model for Enhanced Autonomous Driving Agent | | 0 |
| Aligned Vector Quantization for Edge-Cloud Collabrative Vision-Language Models | | 0 |
| M3DocRAG: Multi-modal Retrieval is What You Need for Multi-page Multi-document Understanding | | 0 |
| Seeing is Deceiving: Exploitation of Visual Pathways in Multi-Modal Language Models | | 0 |
| SaSR-Net: Source-Aware Semantic Representation Network for Enhancing Audio-Visual Question Answering | | 0 |
| Select2Plan: Training-Free ICL-Based Planning through VQA and Memory Retrieval | | 0 |
| NeurIPS 2023 Competition: Privacy Preserving Federated Learning Document VQA | | 0 |
| Multimodal Commonsense Knowledge Distillation for Visual Question Answering | | 0 |
| MME-Finance: A Multimodal Finance Benchmark for Expert-level Understanding and Reasoning | | 0 |
| From Pixels to Prose: Advancing Multi-Modal Language Models for Remote Sensing | | 0 |
| One VLM to Keep it Learning: Generation and Balancing for Data-free Continual Visual Question Answering | | 0 |
| RS-MoE: Mixture of Experts for Remote Sensing Image Captioning and Visual Question Answering | | 0 |
| A Visual Question Answering Method for SAR Ship: Breaking the Requirement for Multimodal Dataset Construction and Model Fine-Tuning | | 0 |
| Goal-Oriented Semantic Communication for Wireless Visual Question Answering | | 0 |
| Designing a Robust Radiology Report Generation System | | 0 |
| Right this way: Can VLMs Guide Us to See More to Answer Questions? | Code | 0 |
| SimpsonsVQA: Enhancing Inquiry-Based Learning with a Tailored Dataset | | 0 |
| GRADE: Quantifying Sample Diversity in Text-to-Image Models | | 0 |
| Are VLMs Really Blind | Code | 0 |
| AutoBench-V: Can Large Vision-Language Models Benchmark Themselves? | Code | 0 |
| Attention Overlap Is Responsible for The Entity Missing Problem in Text-to-image Diffusion Models! | | 0 |
| Few-Shot Multimodal Explanation for Visual Question Answering | Code | 0 |
| Face-MLLM: A Large Face Perception Model | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |