SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 851–875 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| Modality-Aware Integration with Large Language Models for Knowledge-based Visual Question Answering | — | 0 |
| Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models | — | 0 |
| Vision-Flan: Scaling Human-Labeled Tasks in Visual Instruction Tuning | — | 0 |
| Aligning Modalities in Vision Large Language Models via Preference Fine-tuning | Code | 2 |
| ALLaVA: Harnessing GPT4V-Synthesized Data for Lite Vision-Language Models | Code | 3 |
| CoLLaVO: Crayon Large Language and Vision mOdel | Code | 2 |
| VQAttack: Transferable Adversarial Attacks on Visual Question Answering via Pre-trained Models | — | 0 |
| II-MMR: Identifying and Improving Multi-modal Multi-hop Reasoning in Visual Question Answering | Code | 0 |
| PaLM2-VAdapter: Progressively Aligned Language Model Makes a Strong Vision-language Adapter | — | 0 |
| Multi-modal Preference Alignment Remedies Degradation of Visual Instruction Tuning on Language Models | Code | 1 |
| Prompt-based Personalized Federated Learning for Medical Visual Question Answering | — | 0 |
| Pretraining Vision-Language Model for Difference Visual Question Answering in Longitudinal Chest X-rays | Code | 0 |
| OmniMedVQA: A New Large-Scale Comprehensive Evaluation Benchmark for Medical LVLM | Code | 4 |
| Learning How To Ask: Cycle-Consistency Refines Prompts in Multimodal Foundation Models | — | 0 |
| PreFLMR: Scaling Up Fine-Grained Late-Interaction Multi-modal Retrievers | Code | 3 |
| Visually Dehallucinative Instruction Generation | Code | 0 |
| Visual Question Answering Instruction: Unlocking Multimodal Large Language Model To Domain-Specific Visual Multitasks | — | 0 |
| PIVOT: Iterative Visual Prompting Elicits Actionable Knowledge for VLMs | — | 0 |
| Synthesizing Sentiment-Controlled Feedback For Multimodal Text and Image Data | Code | 0 |
| Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models | Code | 4 |
| Q-Bench+: A Benchmark for Multi-modal Foundation Models on Low-level Vision from Single Images to Pairs | Code | 3 |
| Open-ended VQA benchmarking of Vision-Language models by exploiting Classification datasets and their semantic hierarchy | Code | 1 |
| Gemini Goes to Med School: Exploring the Capabilities of Multimodal Large Language Models on Medical Challenge Problems & Hallucinations | Code | 1 |
| Knowledge Graphs Meet Multi-Modal Learning: A Comprehensive Survey | Code | 3 |
| CIC: A Framework for Culturally-Aware Image Captioning | — | 0 |
Page 35 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |