SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 551–600 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images | Code | 1 |
| Found a Reason for me? Weakly-supervised Grounded Visual Question Answering using Capsules | Code | 1 |
| Nearest Neighbor Normalization Improves Multimodal Retrieval | Code | 1 |
| Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions? | Code | 1 |
| Multi-Step Visual Reasoning with Visual Tokens Scaling and Verification | Code | 1 |
| Good Questions Help Zero-Shot Image Reasoning | Code | 1 |
| FloodNet: A High Resolution Aerial Imagery Dataset for Post Flood Scene Understanding | Code | 1 |
| Open-Ended Medical Visual Question Answering Through Prefix Tuning of Language Models | Code | 1 |
| Florence: A New Foundation Model for Computer Vision | Code | 1 |
| MUTANT: A Training Paradigm for Out-of-Distribution Generalization in Visual Question Answering | Code | 1 |
| Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization | Code | 1 |
| Fine-grained Image Classification and Retrieval by Combining Visual and Locally Pooled Textual Features | Code | 1 |
| Fine-Grained Evaluation of Large Vision-Language Models in Autonomous Driving | Code | 1 |
| Graphhopper: Multi-Hop Scene Graph Reasoning for Visual Question Answering | Code | 1 |
| CausalChaos! Dataset for Comprehensive Causal Action Question Answering Over Longer Causal Chains Grounded in Dynamic Visual Scenes | Code | 1 |
| Graph Optimal Transport for Cross-Domain Alignment | Code | 1 |
| Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs | Code | 1 |
| Multimodal Prompt Retrieval for Generative Visual Question Answering | Code | 1 |
| Faithful Multimodal Explanation for Visual Question Answering | Code | 1 |
| Multi-modal Pre-training for Medical Vision-language Understanding and Generation: An Empirical Study with A New Benchmark | Code | 1 |
| HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles | Code | 1 |
| Pano-AVQA: Grounded Audio-Visual Question Answering on 360° Videos | Code | 1 |
| Multi-modal Understanding and Generation for Medical Images and Text via Vision-Language Pre-Training | Code | 1 |
| Hallucination Augmented Contrastive Learning for Multimodal Large Language Model | Code | 1 |
| AI2-THOR: An Interactive 3D Environment for Visual AI | Code | 1 |
| IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages | Code | 1 |
| EarthVQA: Towards Queryable Earth via Relational Reasoning-Based Remote Sensing Visual Question Answering | Code | 1 |
| Hierarchical multimodal transformers for Multi-Page DocVQA | Code | 1 |
| How Do Multimodal Large Language Models Handle Complex Multimodal Reasoning? Placing Them in An Extensible Escape Game | Code | 1 |
| Evaluating Image Hallucination in Text-to-Image Generation with Question-Answering | Code | 1 |
| FaceBench: A Multi-View Multi-Level Facial Attribute VQA Dataset for Benchmarking Face Perception MLLMs | Code | 1 |
| Multimodality Representation Learning: A Survey on Evolution, Pretraining and Its Applications | Code | 1 |
| Expressive Scene Graph Generation Using Commonsense Knowledge Infusion for Visual Understanding and Reasoning | Code | 1 |
| How Much Can CLIP Benefit Vision-and-Language Tasks? | Code | 1 |
| CaMML: Context-Aware Multimodal Learner for Large Models | Code | 1 |
| Hypergraph Transformer: Weakly-supervised Multi-hop Reasoning for Knowledge-based Visual Question Answering | Code | 1 |
| Multi-modal Preference Alignment Remedies Degradation of Visual Instruction Tuning on Language Models | Code | 1 |
| HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models | Code | 1 |
| Change Detection Meets Visual Question Answering | Code | 1 |
| I Can't Believe There's No Images! Learning Visual Tasks Using only Language Supervision | Code | 1 |
| Multiple Meta-model Quantifying for Medical Visual Question Answering | Code | 1 |
| A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge | Code | 1 |
| Dynamic Multimodal Evaluation with Flexible Complexity by Vision-Language Bootstrapping | Code | 1 |
| Multimodal Federated Learning via Contrastive Representation Ensemble | Code | 1 |
| Explaining Autonomous Driving Actions with Visual Question Answering | Code | 1 |
| Probing Image-Language Transformers for Verb Understanding | Code | 1 |
| ChatVLA: Unified Multimodal Understanding and Robot Control with Vision-Language-Action Model | Code | 1 |
| Progressive Compositionality In Text-to-Image Generative Models | Code | 1 |
| Calibrating Concepts and Operations: Towards Symbolic Reasoning on Real Images | Code | 1 |
| Investigating Prompting Techniques for Zero- and Few-Shot Visual Question Answering | Code | 1 |
Page 12 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |