SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 2051–2075 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| Towards Visual Question Answering on Pathology Images | Code | 0 |
| Active Learning for Visual Question Answering: An Empirical Study | Code | 0 |
| Improved RAMEN: Towards Domain Generalization for Visual Question Answering | Code | 0 |
| Improved Fusion of Visual and Language Representations by Dense Symmetric Co-Attention for Visual Question Answering | Code | 0 |
| RUBi: Reducing Unimodal Biases for Visual Question Answering | Code | 0 |
| RUBi: Reducing Unimodal Biases in Visual Question Answering | Code | 0 |
| Image Content Generation with Causal Reasoning | Code | 0 |
| Safeguarding Data in Multimodal AI: A Differentially Private Approach to CLIP Training | Code | 0 |
| Zero-shot Translation of Attention Patterns in VQA Models to Natural Language | Code | 0 |
| Track the Answer: Extending TextVQA from Image to Video with Spatio-Temporal Clues | Code | 0 |
| Image Captioning for Effective Use of Language Models in Knowledge-Based Visual Question Answering | Code | 0 |
| Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks | Code | 0 |
| ArtQuest: Countering Hidden Language Biases in ArtVQA | Code | 0 |
| IMAD: IMage-Augmented multi-modal Dialogue | Code | 0 |
| Beyond Bilinear: Generalized Multimodal Factorized High-order Pooling for Visual Question Answering | Code | 0 |
| Illusory VQA: Benchmarking and Enhancing Multimodal Models on Visual Illusions | Code | 0 |
| Transfer Learning via Unsupervised Task Discovery for Visual Question Answering | Code | 0 |
| Transformer Module Networks for Systematic Generalization in Visual Question Answering | Code | 0 |
| Beyond Accuracy: A Consolidated Tool for Visual Question Answering Benchmarking | Code | 0 |
| BERTHop: An Effective Vision-and-Language Model for Chest X-ray Disease Diagnosis | Code | 0 |
| Scene Graph Prediction with Limited Labels | Code | 0 |
| LLaVA Steering: Visual Instruction Tuning with 500x Fewer Parameters through Modality Linear Representation-Steering | Code | 0 |
| Benchmarking Vision-Language Contrastive Methods for Medical Representation Learning | Code | 0 |
| Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning | Code | 0 |
| ILLUME: Rationalizing Vision-Language Models through Human Interactions | Code | 0 |
Page 83 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |