SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1001–1050 of 2177 papers

Title | Status | Hype
CogVLM: Visual Expert for Pretrained Language Models | Code | 5
GPT-4V-AD: Exploring Grounding Potential of VQA-oriented GPT-4V for Zero-shot Anomaly Detection | Code | 1
VQA-GEN: A Visual Question Answering Benchmark for Domain Generalization | — | 0
From Image to Language: A Critical Analysis of Visual Question Answering (VQA) Approaches, Challenges, and Opportunities | — | 0
Making Large Language Models Better Data Creators | Code | 1
Language Guided Visual Question Answering: Elevate Your Multimodal Language Model Using Knowledge-Enriched Prompts | Code | 1
A Systematic Evaluation of GPT-4V's Multimodal Capability for Medical Image Analysis | — | 0
Learning to Follow Object-Centric Image Editing Instructions Faithfully | Code | 0
Multimodal ChatGPT for Medical Applications: an Experimental Study of GPT-4V | Code | 1
Dynamic Task and Weight Prioritization Curriculum Learning for Multimodal Imagery | Code | 0
EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images | Code | 1
3D-Aware Visual Question Answering about Parts, Poses and Occlusions | Code | 1
ViCLEVR: A Visual Reasoning Dataset and Hybrid Multimodal Fusion Model for Visual Question Answering in Vietnamese | Code | 0
Davidsonian Scene Graph: Improving Reliability in Fine-grained Evaluation for Text-to-Image Generation | — | 0
AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors | Code | 1
Incorporating Probing Signals into Multimodal Machine Translation via Visual Question-Answering Pairs | Code | 0
Exploring Question Decomposition for Zero-Shot VQA | — | 0
Enhancing Document Information Analysis with Multi-Task Pre-training: A Robust Approach for Information Extraction in Visually-Rich Documents | — | 0
CAD -- Contextual Multi-modal Alignment for Dynamic AVQA | — | 0
Towards Perceiving Small Visual Details in Zero-shot Visual Question Answering with Multimodal LLMs | Code | 1
Multimodal Representations for Teacher-Guided Compositional Visual Reasoning | — | 0
LXMERT Model Compression for Visual Question Answering | Code | 0
Dataset Bias Mitigation in Multiple-Choice Visual Question Answering and Beyond | — | 0
SILC: Improving Vision Language Pretraining with Self-Distillation | — | 0
A Simple Baseline for Knowledge-Based Visual Question Answering | Code | 0
RSAdapter: Adapting Multimodal Models for Remote Sensing Visual Question Answering | Code | 0
Frozen Transformers in Language Models Are Effective Visual Encoder Layers | Code | 2
UNK-VQA: A Dataset and a Probe into the Abstention Ability of Multi-modal Large Models | Code | 0
MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning | Code | 7
Enhancing BERT-Based Visual Question Answering through Keyword-Driven Sentence Selection | — | 0
From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models | Code | 2
Exploring Sparse Spatial Relation in Graph Inference for Text-Based VQA | — | 0
Open-Set Knowledge-Based Visual Question Answering with Inference Paths | Code | 0
Ziya-Visual: Bilingual Large Vision-Language Model via Multi-Task Instruction Tuning | — | 0
Jaeger: A Concatenation-Based Multi-Transformer VQA Model | — | 0
Improving mitosis detection on histopathology images using large vision-language models | — | 0
Uncovering Hidden Connections: Iterative Search and Reasoning for Video-grounded Dialog | Code | 0
Solution for SMART-101 Challenge of ICCV Multi-modal Algorithmic Reasoning Task 2023 | — | 0
Negative Object Presence Evaluation (NOPE) to Measure Object Hallucination in Vision-Language Models | — | 0
Rephrase, Augment, Reason: Visual Grounding of Questions for Vision-Language Models | Code | 1
Causal Reasoning through Two Layers of Cognition for Improving Generalization in Visual Question Answering | — | 0
Lightweight In-Context Tuning for Multimodal Unified Models | — | 0
Improved Baselines with Visual Instruction Tuning | Code | 6
On the Cognition of Visual Question Answering Models and Human Intelligence: A Comparative Study | — | 0
Improving Automatic VQA Evaluation Using Large Language Models | — | 0
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Code | 2
SelfGraphVQA: A Self-Supervised Graph Neural Network for Scene-based Question Answering | — | 0
Human Mobility Question Answering (Vision Paper) | — | 0
Fine-grained Late-interaction Multi-modal Retrieval for Retrieval Augmented Visual Question Answering | Code | 2
Toloka Visual Question Answering Benchmark | Code | 1
Page 21 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | — | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified