
Visual Question Answering

MLLM Leaderboard

Papers

Showing 401–450 of 2,177 papers

Title | Status | Hype
Aligned Vector Quantization for Edge-Cloud Collabrative Vision-Language Models | - | 0
Seeing is Deceiving: Exploitation of Visual Pathways in Multi-Modal Language Models | - | 0
SaSR-Net: Source-Aware Semantic Representation Network for Enhancing Audio-Visual Question Answering | - | 0
M3DocRAG: Multi-modal Retrieval is What You Need for Multi-page Multi-document Understanding | - | 0
NeurIPS 2023 Competition: Privacy Preserving Federated Learning Document VQA | - | 0
VQA^2: Visual Question Answering for Video Quality Assessment | Code | 2
Select2Plan: Training-Free ICL-Based Planning through VQA and Memory Retrieval | - | 0
From Pixels to Prose: Advancing Multi-Modal Language Models for Remote Sensing | - | 0
Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent | Code | 3
MME-Finance: A Multimodal Finance Benchmark for Expert-level Understanding and Reasoning | - | 0
Multimodal Commonsense Knowledge Distillation for Visual Question Answering | - | 0
One VLM to Keep it Learning: Generation and Balancing for Data-free Continual Visual Question Answering | - | 0
Goal-Oriented Semantic Communication for Wireless Visual Question Answering | - | 0
A Visual Question Answering Method for SAR Ship: Breaking the Requirement for Multimodal Dataset Construction and Model Fine-Tuning | - | 0
RS-MoE: Mixture of Experts for Remote Sensing Image Captioning and Visual Question Answering | - | 0
Designing a Robust Radiology Report Generation System | - | 0
Right this way: Can VLMs Guide Us to See More to Answer Questions? | Code | 0
Show Me What and Where has Changed? Question Answering and Grounding for Remote Sensing Change Detection | Code | 1
Nearest Neighbor Normalization Improves Multimodal Retrieval | Code | 1
SimpsonsVQA: Enhancing Inquiry-Based Learning with a Tailored Dataset | - | 0
GRADE: Quantifying Sample Diversity in Text-to-Image Models | - | 0
Are VLMs Really Blind | Code | 0
Few-Shot Multimodal Explanation for Visual Question Answering | Code | 0
Attention Overlap Is Responsible for The Entity Missing Problem in Text-to-image Diffusion Models! | - | 0
Face-MLLM: A Large Face Perception Model | - | 0
Efficient Bilinear Attention-based Fusion for Medical Visual Question Answering | - | 0
AutoBench-V: Can Large Vision-Language Models Benchmark Themselves? | Code | 0
R-LLaVA: Improving Med-VQA Understanding through Visual Region of Interest | - | 0
Sensor2Text: Enabling Natural Language Interactions for Daily Activity Tracking Using Wearable Sensors | - | 0
GiVE: Guiding Visual Encoder to Perceive Overlooked Information | - | 0
Visual Text Matters: Improving Text-KVQA with Visual Text Entity Knowledge-aware Large Multimodal Assistant | Code | 0
Interpretable Bilingual Multimodal Large Language Model for Diverse Biomedical Tasks | - | 0
Which Client is Reliable?: A Reliable and Personalized Prompt-based Federated Learning for Medical Image Question Answering | - | 0
ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning | Code | 1
Progressive Compositionality In Text-to-Image Generative Models | Code | 1
Order Matters: Exploring Order Sensitivity in Multimodal Large Language Models | - | 0
Visual Question Answering in Ophthalmology: A Progressive and Practical Perspective | - | 0
Object-Centric Temporal Consistency via Conditional Autoregressive Inductive Biases | - | 0
Griffon-G: Bridging Vision-Language and Vision-Centric Tasks via Large Multimodal Models | - | 0
CROPE: Evaluating In-Context Adaptation of Vision and Language Models to Culture-Specific Concepts | Code | 0
ChitroJera: A Regionally Relevant Visual Question Answering Dataset for Bangla | - | 0
LLaVA-Ultra: Large Chinese Language and Vision Assistant for Ultrasound | - | 0
E3D-GPT: Enhanced 3D Visual Foundation for Medical Vision-Language Model | - | 0
Zero-shot Action Localization via the Confidence of Large Vision-Language Models | - | 0
NaturalBench: Evaluating Vision-Language Models on Natural Adversarial Samples | - | 0
ViConsFormer: Constituting Meaningful Phrases of Scene Texts using Transformer-based Method in Vietnamese Text-based Visual Question Answering | Code | 0
MultiChartQA: Benchmarking Vision-Language Models on Multi-Chart Problems | Code | 1
Help Me Identify: Is an LLM+VQA System All We Need to Identify Visual Concepts? | Code | 0
Improving Multi-modal Large Language Model through Boosting Vision Capabilities | - | 0
H2OVL-Mississippi Vision Language Models Technical Report | - | 0
Page 9 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | - | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | - | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | - | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | - | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | - | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | - | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | - | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | - | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | - | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | - | Unverified