SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 951–1000 of 2177 papers

Title | Status | Hype
Exploring Modulated Detection Transformer as a Tool for Action Recognition in Videos | Code | 0
A simple neural network module for relational reasoning | Code | 0
Kvasir-VQA: A Text-Image Pair GI Tract Dataset | Code | 0
Kvasir-VQA-x1: A Multimodal Dataset for Medical Reasoning and Robust MedVQA in Gastrointestinal Endoscopy | Code | 0
Modularized Zero-shot VQA with Pre-trained Models | Code | 0
Modulating early visual processing by language | Code | 0
A Simple Loss Function for Improving the Convergence and Accuracy of Visual Question Answering Models | Code | 0
A Simple Baseline for Knowledge-Based Visual Question Answering | Code | 0
MM-PoE: Multiple Choice Reasoning via. Process of Elimination using Multi-Modal Models | Code | 0
MM-Prompt: Cross-Modal Prompt Tuning for Continual Visual Question Answering | Code | 0
Examining Gender and Racial Bias in Large Vision-Language Models Using a Novel Dataset of Parallel Images | Code | 0
MIRTT: Learning Multimodal Interaction Representations from Trilinear Transformers for Visual Question Answering | Code | 0
Mixture-of-Subspaces in Low-Rank Adaptation | Code | 0
Evaluating Fairness in Large Vision-Language Models Across Diverse Demographic Attributes and Prompts | Code | 0
ArtQuest: Countering Hidden Language Biases in ArtVQA | Code | 0
Evaluating Attribute Comprehension in Large Vision-Language Models | Code | 0
ERVQA: A Dataset to Benchmark the Readiness of Large Vision Language Models in Hospital Environments | Code | 0
MHSAN: Multi-Head Self-Attention Network for Visual Semantic Embedding | Code | 0
MUREL: Multimodal Relational Reasoning for Visual Question Answering | Code | 0
Med-PMC: Medical Personalized Multi-modal Consultation with a Proactive Ask-First-Observe-Next Paradigm | Code | 0
MedHallTune: An Instruction-Tuning Benchmark for Mitigating Medical Hallucination in Vision-Language Models | Code | 0
Enhancing Vietnamese VQA through Curriculum Learning on Raw and Augmented Text Representations | Code | 0
Measuring Faithful and Plausible Visual Grounding in VQA | Code | 0
Are VLMs Really Blind | Code | 0
Self-Bootstrapped Visual-Language Model for Knowledge Selection and Question Answering | Code | 0
Latent Alignment and Variational Attention | Code | 0
Answer Questions with Right Image Regions: A Visual Attention Regularization Approach | Code | 0
CAST: Cross-modal Alignment Similarity Test for Vision Language Models | Code | 0
Enhancing Cross-Prompt Transferability in Vision-Language Models through Contextual Injection of Target Tokens | Code | 0
Are Vision LLMs Road-Ready? A Comprehensive Benchmark for Safety-Critical Driving Video Understanding | Code | 0
Enhancing Continual Learning in Visual Question Answering with Modality-Aware Feature Distillation | Code | 0
LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking | Code | 0
Enhancing Compositional Reasoning in Vision-Language Models with Synthetic Preference Data | Code | 0
Cascaded Mutual Modulation for Visual Reasoning | Code | 0
Learn from Downstream and Be Yourself in Multimodal Large Language Model Fine-Tuning | Code | 0
Answer Them All! Toward Universal Visual Question Answering Models | Code | 0
MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models | Code | 0
Learning by Correction: Efficient Tuning Task for Zero-Shot Generative Vision-Language Reasoning | Code | 0
MaMMUT: A Simple Architecture for Joint Learning for MultiModal Tasks | Code | 0
Visual Question Answering: which investigated applications? | Code | 0
End-to-End Instance Segmentation with Recurrent Attention | Code | 0
End-to-End Audio Visual Scene-Aware Dialog using Multimodal Attention-Based Video Features | Code | 0
LPF: A Language-Prior Feedback Objective Function for De-biased Visual Question Answering | Code | 0
LXMERT Model Compression for Visual Question Answering | Code | 0
Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering | Code | 0
Marten: Visual Question Answering with Mask Generation for Multi-modal Document Understanding | Code | 0
Logical Implications for Visual Question Answering Consistency | Code | 0
Locally Smoothed Neural Networks | Code | 0
LLM-Assisted Multi-Teacher Continual Learning for Visual Question Answering in Robotic Surgery | Code | 0
Loss re-scaling VQA: Revisiting the Language Prior Problem from a Class-imbalance View | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | - | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | - | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | - | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | - | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | - | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | - | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | - | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | - | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | - | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | - | Unverified