SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 851–900 of 2177 papers

Title | Status | Hype
Probing Visual Language Priors in VLMs | | 0
MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models | Code | 0
Enhanced Multimodal RAG-LLM for Accurate Visual Question Answering | | 0
UniRS: Unifying Multi-temporal Remote Sensing Tasks through Vision Language Models | Code | 0
HALLUCINOGEN: A Benchmark for Evaluating Object Hallucination in Large Visual-Language Models | Code | 0
ErgoChat: a Visual Query System for the Ergonomic Risk Assessment of Construction Workers | | 0
LININ: Logic Integrated Neural Inference Network for Explanatory Visual Question Answering | Code | 0
Multi-Agents Based on Large Language Models for Knowledge-based Visual Question Answering | | 0
TextMatch: Enhancing Image-Text Consistency Through Multimodal Optimization | | 0
Survey of Large Multimodal Model Datasets, Application Categories and Taxonomy | | 0
Multimodal Preference Data Synthetic Alignment with Reward Model | Code | 0
Cross-Lingual Text-Rich Visual Comprehension: An Information Theory Perspective | Code | 0
FFA Sora, video generation as fundus fluorescein angiography simulator | | 0
Prompting Large Language Models with Rationale Heuristics for Knowledge-based Visual Question Answering | | 0
SilVar: Speech Driven Multimodal Model for Reasoning Visual Question Answering and Object Localization | Code | 0
NeSyCoCo: A Neuro-Symbolic Concept Composer for Compositional Generalization | Code | 0
FedPIA -- Permuting and Integrating Adapters leveraging Wasserstein Barycenters for Finetuning Foundation Models in Multi-Modal Federated Learning | | 0
Unveiling Uncertainty: A Deep Dive into Calibration and Performance of Multimodal Large Language Models | Code | 0
Consistency of Compositional Generalization across Multiple Levels | Code | 0
A Concept-Centric Approach to Multi-Modality Learning | | 0
Track the Answer: Extending TextVQA from Image to Video with Spatio-Temporal Clues | Code | 0
CPath-Omni: A Unified Multimodal Foundation Model for Patch and Whole Slide Image Analysis in Computational Pathology | | 0
LLaVA Steering: Visual Instruction Tuning with 500x Fewer Parameters through Modality Linear Representation-Steering | Code | 0
Overview of TREC 2024 Medical Video Question Answering (MedVidQA) Track | | 0
Damage Assessment after Natural Disasters with UAVs: Semantic Feature Extraction using Deep Learning | | 0
Patch-level Sounding Object Tracking for Audio-Visual Question Answering | | 0
VLR-Bench: Multilingual Benchmark Dataset for Vision-Language Retrieval Augmented Generation | | 0
ViUniT: Visual Unit Tests for More Robust Visual Programming | | 0
Discrete Subgraph Sampling for Interpretable Graph based Visual Question Answering | Code | 0
Illusory VQA: Benchmarking and Enhancing Multimodal Models on Visual Illusions | Code | 0
Barking Up The Syntactic Tree: Enhancing VLM Training with Syntactic Losses | | 0
How Vision-Language Tasks Benefit from Large Pre-trained Models: A Survey | | 0
A Multimodal Social Agent | | 0
Can We Generate Visual Programs Without Prompting LLMs? | | 0
MM-PoE: Multiple Choice Reasoning via. Process of Elimination using Multi-Modal Models | Code | 0
ILLUME: Illuminating Your LLMs to See, Draw, and Self-Enhance | | 0
Ranked from Within: Ranking Large Multimodal Models for Visual Question Answering Without Labels | | 0
FM2DS: Few-Shot Multimodal Multihop Data Synthesis with Knowledge Distillation for Question Answering | Code | 0
Findings of the Second BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora | | 0
Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling | | 0
EACO: Enhancing Alignment in Multimodal LLMs via Critical Observation | | 0
T2I-FactualBench: Benchmarking the Factuality of Text-to-Image Models with Knowledge-Intensive Concepts | | 0
Copy-Move Forgery Detection and Question Answering for Remote Sensing Image | Code | 0
Explainable and Interpretable Multimodal Large Language Models: A Comprehensive Survey | | 0
CEGI: Measuring the trade-off between efficiency and carbon emissions for SLMs and VLMs | | 0
Understanding the World's Museums through Vision-Language Reasoning | Code | 0
DLaVA: Document Language and Vision Assistant for Answer Localization with Enhanced Interpretability and Trustworthiness | Code | 0
SURE-VQA: Systematic Understanding of Robustness Evaluation in Medical VQA Tasks | Code | 0
Beyond Logit Lens: Contextual Embeddings for Robust Hallucination Detection & Grounding in VLMs | | 0
Sparse Attention Vectors: Generative Multimodal Model Features Are Discriminative Vision-Language Classifiers | | 0
Page 18 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified