SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 951–975 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| Examining Gender and Racial Bias in Large Vision-Language Models Using a Novel Dataset of Parallel Images | Code | 0 |
| MHSAN: Multi-Head Self-Attention Network for Visual Semantic Embedding | Code | 0 |
| Kvasir-VQA: A Text-Image Pair GI Tract Dataset | Code | 0 |
| Kvasir-VQA-x1: A Multimodal Dataset for Medical Reasoning and Robust MedVQA in Gastrointestinal Endoscopy | Code | 0 |
| Med-PMC: Medical Personalized Multi-modal Consultation with a Proactive Ask-First-Observe-Next Paradigm | Code | 0 |
| Evaluating Fairness in Large Vision-Language Models Across Diverse Demographic Attributes and Prompts | Code | 0 |
| ArtQuest: Countering Hidden Language Biases in ArtVQA | Code | 0 |
| BLOCK: Bilinear Superdiagonal Fusion for Visual Question Answering and Visual Relationship Detection | Code | 0 |
| Evaluating Attribute Comprehension in Large Vision-Language Models | Code | 0 |
| ERVQA: A Dataset to Benchmark the Readiness of Large Vision Language Models in Hospital Environments | Code | 0 |
| MedHallTune: An Instruction-Tuning Benchmark for Mitigating Medical Hallucination in Vision-Language Models | Code | 0 |
| Marten: Visual Question Answering with Mask Generation for Multi-modal Document Understanding | Code | 0 |
| Measuring Faithful and Plausible Visual Grounding in VQA | Code | 0 |
| Enhancing Vietnamese VQA through Curriculum Learning on Raw and Augmented Text Representations | Code | 0 |
| Are VLMs Really Blind | Code | 0 |
| MaMMUT: A Simple Architecture for Joint Learning for MultiModal Tasks | Code | 0 |
| Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering | Code | 0 |
| MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models | Code | 0 |
| CAST: Cross-modal Alignment Similarity Test for Vision Language Models | Code | 0 |
| Enhancing Cross-Prompt Transferability in Vision-Language Models through Contextual Injection of Target Tokens | Code | 0 |
| Are Vision LLMs Road-Ready? A Comprehensive Benchmark for Safety-Critical Driving Video Understanding | Code | 0 |
| Enhancing Continual Learning in Visual Question Answering with Modality-Aware Feature Distillation | Code | 0 |
| Enhancing Compositional Reasoning in Vision-Language Models with Synthetic Preference Data | Code | 0 |
| Cascaded Mutual Modulation for Visual Reasoning | Code | 0 |
| LPF: A Language-Prior Feedback Objective Function for De-biased Visual Question Answering | Code | 0 |
Page 39 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |