SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 951–1000 of 2177 papers

Title | Status | Hype
Efficient Bilinear Attention-based Fusion for Medical Visual Question Answering | - | 0
R-LLaVA: Improving Med-VQA Understanding through Visual Region of Interest | - | 0
Sensor2Text: Enabling Natural Language Interactions for Daily Activity Tracking Using Wearable Sensors | - | 0
GiVE: Guiding Visual Encoder to Perceive Overlooked Information | - | 0
Interpretable Bilingual Multimodal Large Language Model for Diverse Biomedical Tasks | - | 0
Visual Text Matters: Improving Text-KVQA with Visual Text Entity Knowledge-aware Large Multimodal Assistant | Code | 0
Which Client is Reliable?: A Reliable and Personalized Prompt-based Federated Learning for Medical Image Question Answering | - | 0
Order Matters: Exploring Order Sensitivity in Multimodal Large Language Models | - | 0
Visual Question Answering in Ophthalmology: A Progressive and Practical Perspective | - | 0
Griffon-G: Bridging Vision-Language and Vision-Centric Tasks via Large Multimodal Models | - | 0
Object-Centric Temporal Consistency via Conditional Autoregressive Inductive Biases | - | 0
CROPE: Evaluating In-Context Adaptation of Vision and Language Models to Culture-Specific Concepts | Code | 0
LLaVA-Ultra: Large Chinese Language and Vision Assistant for Ultrasound | - | 0
ChitroJera: A Regionally Relevant Visual Question Answering Dataset for Bangla | - | 0
ViConsFormer: Constituting Meaningful Phrases of Scene Texts using Transformer-based Method in Vietnamese Text-based Visual Question Answering | Code | 0
Zero-shot Action Localization via the Confidence of Large Vision-Language Models | - | 0
NaturalBench: Evaluating Vision-Language Models on Natural Adversarial Samples | - | 0
E3D-GPT: Enhanced 3D Visual Foundation for Medical Vision-Language Model | - | 0
RescueADI: Adaptive Disaster Interpretation in Remote Sensing Images with Autonomous Agents | - | 0
Improving Multi-modal Large Language Model through Boosting Vision Capabilities | - | 0
Help Me Identify: Is an LLM+VQA System All We Need to Identify Visual Concepts? | Code | 0
H2OVL-Mississippi Vision Language Models Technical Report | - | 0
γ-MoD: Exploring Mixture-of-Depth Adaptation for Multimodal Large Language Models | - | 0
Cross-Modal Safety Mechanism Transfer in Large Vision-Language Models | - | 0
Difficult Task Yes but Simple Task No: Unveiling the Laziness in Multimodal LLMs | Code | 0
OMCAT: Omni Context Aware Transformer | - | 0
MMAR: Towards Lossless Multi-Modal Auto-Regressive Probabilistic Modeling | - | 0
Eliminating the Language Bias for Visual Question Answering with fine-grained Causal Intervention | - | 0
Surgical-LLaVA: Toward Surgical Scenario Understanding via Large Language and Vision Models | - | 0
MMCOMPOSITION: Revisiting the Compositionality of Pre-trained Vision-Language Models | - | 0
VLFeedback: A Large-Scale AI Feedback Dataset for Large Vision-Language Models Alignment | - | 0
Declarative Knowledge Distillation from Large Language Models for Visual Question Answering Datasets | Code | 0
Zero-shot Commonsense Reasoning over Machine Imagination | Code | 0
ViT3D Alignment of LLaMA3: 3D Medical Image Report Generation | - | 0
Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training | - | 0
Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision | - | 0
PAR: Prompt-Aware Token Reduction Method for Efficient Large Multimodal Models | - | 0
Beyond Captioning: Task-Specific Prompting for Improved VLM Performance in Mathematical Reasoning | - | 0
Multimodal Large Language Models and Tunings: Vision, Language, Sensors, Audio, and Beyond | Code | 0
Core Tokensets for Data-efficient Sequential Training of Transformers | Code | 0
ERVQA: A Dataset to Benchmark the Readiness of Large Vision Language Models in Hospital Environments | Code | 0
VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks | - | 0
MM-R^3: On (In-)Consistency of Multi-modal Large Language Models (MLLMs) | - | 0
TUBench: Benchmarking Large Vision-Language Models on Trustworthiness with Unanswerable Questions | Code | 0
Gamified crowd-sourcing of high-quality data for visual fine-tuning | - | 0
Backdooring Vision-Language Models with Out-Of-Distribution Data | - | 0
Why context matters in VQA and Reasoning: Semantic interventions for VLM input modalities | - | 0
BabelBench: An Omni Benchmark for Code-Driven Analysis of Multimodal and Multistructured Data | Code | 0
FMBench: Benchmarking Fairness in Multimodal Large Language Models on Medical Tasks | - | 0
Unleashing the Potentials of Likelihood Composition for Multi-modal Language Models | Code | 0
Page 20 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | - | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | - | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | - | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | - | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | - | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | - | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | - | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | - | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | - | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | - | Unverified