SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1901–1950 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| Pragmatic Issue-Sensitive Image Captioning | Code | 0 |
| Enhancing Compositional Reasoning in Vision-Language Models with Synthetic Preference Data | Code | 0 |
| Learning Visual Question Answering by Bootstrapping Hard Attention | Code | 0 |
| Cognitive Visual Commonsense Reasoning Using Dynamic Working Memory | Code | 0 |
| Learning to Reason: End-to-End Module Networks for Visual Question Answering | Code | 0 |
| Black-box Model Ensembling for Textual and Visual Question Answering via Information Fusion | Code | 0 |
| End-to-End Instance Segmentation with Recurrent Attention | Code | 0 |
| Pre-Training Multi-Modal Dense Retrievers for Outside-Knowledge Visual Question Answering | Code | 0 |
| Pretraining Vision-Language Model for Difference Visual Question Answering in Longitudinal Chest X-rays | Code | 0 |
| Learning to Model and Ignore Dataset Bias with Mixed Capacity Ensembles | Code | 0 |
| Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs | Code | 0 |
| End-to-End Audio Visual Scene-Aware Dialog using Multimodal Attention-Based Video Features | Code | 0 |
| Probabilistic Embeddings for Frozen Vision-Language Models: Uncertainty Quantification with Gaussian Process Latent Variable Models | Code | 0 |
| Co-attending Regions and Detections with Multi-modal Multiplicative Embedding for VQA | Code | 0 |
| Learning to Follow Object-Centric Image Editing Instructions Faithfully | Code | 0 |
| Co-attending Free-form Regions and Detections with Multi-modal Multiplicative Feature Embedding for Visual Question Answering | Code | 0 |
| Augmenting Visual Question Answering with Semantic Frame Information in a Multitask Learning Approach | Code | 0 |
| VinVL+L: Enriching Visual Representation with Location Context in VQA | Code | 0 |
| CluMo: Cluster-based Modality Fusion Prompt for Continual Learning in Visual Question Answering | Code | 0 |
| TGIF-QA: Toward Spatio-Temporal Reasoning in Visual Question Answering | Code | 0 |
| Learning to Count Objects in Natural Images for Visual Question Answering | Code | 0 |
| Effective Approaches to Batch Parallelization for Dynamic Neural Network Architectures | Code | 0 |
| EaSe: A Diagnostic Tool for VQA based on Answer Diversity | Code | 0 |
| Learning the meanings of function words from grounded language using a visual question answering model | Code | 0 |
| Progressive Prompt Detailing for Improved Alignment in Text-to-Image Generative Models | Code | 0 |
| Learning Representations of Sets through Optimized Permutations | Code | 0 |
| ViQuAE, a Dataset for Knowledge-based Visual Question Answering about Named Entities | Code | 0 |
| ClinKD: Cross-Modal Clinical Knowledge Distiller For Multi-Task Medical Images | Code | 0 |
| VisFIS: Visual Feature Importance Supervision with Right-for-the-Right-Reason Objectives | Code | 0 |
| Learning from Lexical Perturbations for Consistent Visual Question Answering | Code | 0 |
| The Illusion of Competence: Evaluating the Effect of Explanations on Users' Mental Models of Visual Question Answering Systems | Code | 0 |
| Learning Convolutional Text Representations for Visual Question Answering | Code | 0 |
| Attribute Diversity Determines the Systematicity Gap in VQA | Code | 0 |
| What value do explicit high level concepts have in vision to language problems? | Code | 0 |
| CLEVR-Ref+: Diagnosing Visual Reasoning with Referring Expressions | Code | 0 |
| Learning content and context with language bias for Visual Question Answering | Code | 0 |
| The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision | Code | 0 |
| The Promise of Premise: Harnessing Question Premises in Visual Question Answering | Code | 0 |
| Attention on Attention: Architectures for Visual Question Answering (VQA) | Code | 0 |
| Dynamic Task and Weight Prioritization Curriculum Learning for Multimodal Imagery | Code | 0 |
| Ask Your Neurons: A Deep Learning Approach to Visual Question Answering | Code | 0 |
| Learning Conditioned Graph Structures for Interpretable Visual Question Answering | Code | 0 |
| QAVA: Query-Agnostic Visual Attack to Large Vision-Language Models | Code | 0 |
| Learning by Correction: Efficient Tuning Task for Zero-Shot Generative Vision-Language Reasoning | Code | 0 |
| VL-InterpreT: An Interactive Visualization Tool for Interpreting Vision-Language Transformers | Code | 0 |
| QLEVR: A Diagnostic Dataset for Quantificational Language and Elementary Visual Reasoning | Code | 0 |
| QLIP: A Dynamic Quadtree Vision Prior Enhances MLLM Performance Without Retraining | Code | 0 |
| Quantifying and Alleviating the Language Prior Problem in Visual Question Answering | Code | 0 |
| Learn from Downstream and Be Yourself in Multimodal Large Language Model Fine-Tuning | Code | 0 |
| Value-Spectrum: Quantifying Preferences of Vision-Language Models via Value Decomposition in Social Media Contexts | Code | 0 |
Page 39 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |