SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 726–750 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| Understanding Figurative Meaning through Explainable Visual Entailment | Code | 1 |
| Beyond Human Vision: The Role of Large Vision Language Models in Microscope Image Analysis | | 0 |
| CREPE: Coordinate-Aware End-to-End Document Parser | | 0 |
| Enhanced Textual Feature Extraction for Visual Question Answering: A Simple Convolutional Approach | | 0 |
| TableVQA-Bench: A Visual Question Answering Benchmark on Multiple Table Domains | Code | 1 |
| Multi-Page Document Visual Question Answering using Self-Attention Scoring Mechanism | Code | 0 |
| ViOCRVQA: Novel Benchmark Dataset and Vision Reader for Visual Question Answering by Understanding Vietnamese Text in Images | Code | 1 |
| List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs | Code | 2 |
| How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites | | 0 |
| Efficiency in Focus: LayerNorm as a Catalyst for Fine-tuning Medical Visual Language Pre-trained Models | | 0 |
| Fusion of Domain-Adapted Vision and Language Models for Medical Visual Question Answering | | 0 |
| Wiki-LLaVA: Hierarchical Retrieval-Augmented Generation for Multimodal LLMs | | 0 |
| GSCo: Towards Generalizable AI in Medicine via Generalist-Specialist Collaboration | Code | 2 |
| Grounded Knowledge-Enhanced Medical VLP for Chest X-Ray | | 0 |
| WangLab at MEDIQA-M3G 2024: Multimodal Medical Answer Generation using Large Language Models | | 0 |
| Self-Bootstrapped Visual-Language Model for Knowledge Selection and Question Answering | Code | 0 |
| Lost in Space: Probing Fine-grained Spatial Understanding in Vision and Language Resamplers | Code | 0 |
| Exploring Diverse Methods in Visual Question Answering | | 0 |
| LaPA: Latent Prompt Assist Model For Medical Visual Question Answering | Code | 1 |
| PDF-MVQA: A Dataset for Multimodal Information Retrieval in PDF-based Visual Question Answering | | 0 |
| Look Before You Decide: Prompting Active Deduction of MLLMs for Assumptive Reasoning | | 0 |
| TextSquare: Scaling up Text-Centric Visual Instruction Tuning | | 0 |
| Look, Listen, and Answer: Overcoming Biases for Audio-Visual Question Answering | Code | 1 |
| MedThink: Explaining Medical Visual Question Answering via Multimodal Decision-Making Rationale | | 0 |
| Self-Supervised Visual Preference Alignment | Code | 2 |
Page 30 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |