SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 876–900 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| Patch-level Sounding Object Tracking for Audio-Visual Question Answering | — | 0 |
| VLR-Bench: Multilingual Benchmark Dataset for Vision-Language Retrieval Augmented Generation | — | 0 |
| ViUniT: Visual Unit Tests for More Robust Visual Programming | — | 0 |
| Discrete Subgraph Sampling for Interpretable Graph based Visual Question Answering | Code | 0 |
| Illusory VQA: Benchmarking and Enhancing Multimodal Models on Visual Illusions | Code | 0 |
| Barking Up The Syntactic Tree: Enhancing VLM Training with Syntactic Losses | — | 0 |
| How Vision-Language Tasks Benefit from Large Pre-trained Models: A Survey | — | 0 |
| A Multimodal Social Agent | — | 0 |
| Can We Generate Visual Programs Without Prompting LLMs? | — | 0 |
| MM-PoE: Multiple Choice Reasoning via. Process of Elimination using Multi-Modal Models | Code | 0 |
| ILLUME: Illuminating Your LLMs to See, Draw, and Self-Enhance | — | 0 |
| Ranked from Within: Ranking Large Multimodal Models for Visual Question Answering Without Labels | — | 0 |
| FM2DS: Few-Shot Multimodal Multihop Data Synthesis with Knowledge Distillation for Question Answering | Code | 0 |
| Findings of the Second BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora | — | 0 |
| Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling | — | 0 |
| EACO: Enhancing Alignment in Multimodal LLMs via Critical Observation | — | 0 |
| T2I-FactualBench: Benchmarking the Factuality of Text-to-Image Models with Knowledge-Intensive Concepts | — | 0 |
| Copy-Move Forgery Detection and Question Answering for Remote Sensing Image | Code | 0 |
| Explainable and Interpretable Multimodal Large Language Models: A Comprehensive Survey | — | 0 |
| CEGI: Measuring the trade-off between efficiency and carbon emissions for SLMs and VLMs | — | 0 |
| Understanding the World's Museums through Vision-Language Reasoning | Code | 0 |
| DLaVA: Document Language and Vision Assistant for Answer Localization with Enhanced Interpretability and Trustworthiness | Code | 0 |
| SURE-VQA: Systematic Understanding of Robustness Evaluation in Medical VQA Tasks | Code | 0 |
| Beyond Logit Lens: Contextual Embeddings for Robust Hallucination Detection & Grounding in VLMs | — | 0 |
| Sparse Attention Vectors: Generative Multimodal Model Features Are Discriminative Vision-Language Classifiers | — | 0 |
Page 36 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |
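
For readers who want to work with these results programmatically, below is a minimal sketch that represents the table above as plain Python records and filters for claims that still lack verification. The `Entry` schema and its field names (`rank`, `model`, `claimed`, `verified`, `status`) are illustrative assumptions, not a published SOTAVerified data format; the values are copied from the table.

```python
# Minimal sketch: the benchmark rows above as plain Python records.
# The Entry schema is an assumption for illustration only; SOTAVerified
# does not (to our knowledge) publish a data schema or API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Entry:
    rank: int
    model: str
    metric: str
    claimed: float
    verified: Optional[float]  # None until a claim is independently reproduced
    status: str

ENTRIES = [
    Entry(1, "MMCTAgent (GPT-4 + GPT-4V)", "GPT-4 score", 74.24, None, "Unverified"),
    Entry(2, "Qwen2-VL-72B", "GPT-4 score", 74.0, None, "Unverified"),
    Entry(3, "InternVL2.5-78B", "GPT-4 score", 72.3, None, "Unverified"),
    Entry(4, "GPT-4o +text rationale +IoT", "GPT-4 score", 72.2, None, "Unverified"),
    Entry(5, "Lyra-Pro", "GPT-4 score", 71.4, None, "Unverified"),
    Entry(6, "GLM-4V-Plus", "GPT-4 score", 71.1, None, "Unverified"),
    Entry(7, "Phantom-7B", "GPT-4 score", 70.8, None, "Unverified"),
    Entry(8, "InternVL2.5-38B", "GPT-4 score", 68.8, None, "Unverified"),
    Entry(9, "InternVL2-26B (SGP, token ratio 64%)", "GPT-4 score", 65.6, None, "Unverified"),
    Entry(10, "Baichuan-Omni (7B)", "GPT-4 score", 65.4, None, "Unverified"),
]

# Example: list models by claimed score whose results are still unverified.
for e in sorted(ENTRIES, key=lambda e: e.claimed, reverse=True):
    if e.verified is None:
        print(f"{e.rank:>2}. {e.model}: claimed {e.claimed} ({e.status})")
```

Keeping `verified` as a separate optional field, rather than overwriting `claimed`, preserves the distinction the leaderboard itself draws between a self-reported number and an independently checked one.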