SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1926–1950 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| Learning Representations of Sets through Optimized Permutations | Code | 0 |
| ViQuAE, a Dataset for Knowledge-based Visual Question Answering about Named Entities | Code | 0 |
| ClinKD: Cross-Modal Clinical Knowledge Distiller For Multi-Task Medical Images | Code | 0 |
| VisFIS: Visual Feature Importance Supervision with Right-for-the-Right-Reason Objectives | Code | 0 |
| Learning from Lexical Perturbations for Consistent Visual Question Answering | Code | 0 |
| The Illusion of Competence: Evaluating the Effect of Explanations on Users' Mental Models of Visual Question Answering Systems | Code | 0 |
| Learning Convolutional Text Representations for Visual Question Answering | Code | 0 |
| Attribute Diversity Determines the Systematicity Gap in VQA | Code | 0 |
| What value do explicit high level concepts have in vision to language problems? | Code | 0 |
| CLEVR-Ref+: Diagnosing Visual Reasoning with Referring Expressions | Code | 0 |
| Learning content and context with language bias for Visual Question Answering | Code | 0 |
| The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision | Code | 0 |
| The Promise of Premise: Harnessing Question Premises in Visual Question Answering | Code | 0 |
| Attention on Attention: Architectures for Visual Question Answering (VQA) | Code | 0 |
| Dynamic Task and Weight Prioritization Curriculum Learning for Multimodal Imagery | Code | 0 |
| Ask Your Neurons: A Deep Learning Approach to Visual Question Answering | Code | 0 |
| Learning Conditioned Graph Structures for Interpretable Visual Question Answering | Code | 0 |
| QAVA: Query-Agnostic Visual Attack to Large Vision-Language Models | Code | 0 |
| Learning by Correction: Efficient Tuning Task for Zero-Shot Generative Vision-Language Reasoning | Code | 0 |
| VL-InterpreT: An Interactive Visualization Tool for Interpreting Vision-Language Transformers | Code | 0 |
| QLEVR: A Diagnostic Dataset for Quantificational Language and Elementary Visual Reasoning | Code | 0 |
| QLIP: A Dynamic Quadtree Vision Prior Enhances MLLM Performance Without Retraining | Code | 0 |
| Quantifying and Alleviating the Language Prior Problem in Visual Question Answering | Code | 0 |
| Learn from Downstream and Be Yourself in Multimodal Large Language Model Fine-Tuning | Code | 0 |
| Value-Spectrum: Quantifying Preferences of Vision-Language Models via Value Decomposition in Social Media Contexts | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |