SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 2151–2177 of 2177 papers

Title | Status | Hype
SNAP: A Benchmark for Testing the Effects of Capture Conditions on Fundamental Vision Tasks | Code | 0
A Dual-Attention Learning Network with Word and Sentence Embedding for Medical Visual Question Answering | Code | 0
Visual Question Answering using Deep Learning: A Survey and Performance Analysis | Code | 0
General Greedy De-bias Learning | Code | 0
Soft-Prompting with Graph-of-Thought for Multi-modal Representation Learning | Code | 0
Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models | Code | 0
Answer Them All! Toward Universal Visual Question Answering Models | Code | 0
Dataset and Benchmark for Urdu Natural Scenes Text Detection, Recognition and Visual Question Answering | Code | 0
SOrT-ing VQA Models: Contrastive Gradient Learning for Improved Consistency | Code | 0
OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework | Code | 0
SparrowVQE: Visual Question Explanation for Course Content Understanding | Code | 0
Game of Sketches: Deep Recurrent Models of Pictionary-style Word Guessing | Code | 0
Sparse and Structured Visual Attention | Code | 0
Robustness through Data Augmentation Loss Consistency | Code | 0
Fully Authentic Visual Question Answering Dataset from Online Communities | Code | 0
D3: Data Diversity Design for Systematic Generalization in Visual Question Answering | Code | 0
Visual Question Answering: which investigated applications? | Code | 0
CXReasonBench: A Benchmark for Evaluating Structured Diagnostic Reasoning in Chest X-rays | Code | 0
cViL: Cross-Lingual Training of Vision-Language Models using Knowledge Distillation | Code | 0
Speech-Based Visual Question Answering | Code | 0
Adapting Visual Question Answering Models for Enhancing Multimodal Community Q&A Platforms | Code | 0
From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models | Code | 0
Cross-Modal Contrastive Learning for Robust Reasoning in VQA | Code | 0
FRAMES-VQA: Benchmarking Fine-Tuning Robustness across Multi-Modal Shifts in Visual Question Answering | Code | 0
Focal Visual-Text Attention for Visual Question Answering | Code | 0
Cross-Lingual Text-Rich Visual Comprehension: An Information Theory Perspective | Code | 0
UniRS: Unifying Multi-temporal Remote Sensing Tasks through Vision Language Models | Code | 0
Page 44 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified