SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1051–1100 of 2177 papers

Title | Status | Hype
IIU: Independent Inference Units for Knowledge-based Visual Question Answering | Code | 0
Enhancing Visual Question Answering through Ranking-Based Hybrid Training and Multimodal Fusion | — | 0
CROME: Cross-Modal Adapters for Efficient Multimodal LLM | — | 0
Revisiting Multi-Modal LLM Evaluation | — | 0
Img-Diff: Contrastive Data Synthesis for Multimodal Large Language Models | — | 0
Optimus: Accelerating Large-Scale Multi-Modal LLM Training by Bubble Exploitation | — | 0
Targeted Visual Prompting for Medical Visual Question Answering | Code | 0
LLaVA-OneVision: Easy Visual Task Transfer | Code | 0
MMPKUBase: A Comprehensive and High-quality Chinese Multi-modal Knowledge Graph | — | 0
Towards Flexible Evaluation for Generative Visual Question Answering | Code | 0
Prompting Medical Large Vision-Language Models to Diagnose Pathologies by Visual Question Answering | — | 0
SimpleLLM4AD: An End-to-End Vision-Language Model with Graph Visual Question Answering for Autonomous Driving | — | 0
Pyramid Coder: Hierarchical Code Generator for Compositional Visual Question Answering | — | 0
VolDoGer: LLM-assisted Datasets for Domain Generalization in Vision-Language Tasks | — | 0
Take A Step Back: Rethinking the Two Stages in Visual Reasoning | — | 0
AdaCoder: Adaptive Prompt Compression for Programmatic Visual Question Answering | — | 0
VILA^2: VILA Augmented VILA | — | 0
Imperfect Vision Encoders: Efficient and Robust Tuning for Vision-Language Models | — | 0
Knowledge Acquisition Disentanglement for Knowledge-based Visual Question Answering with Large Language Models | Code | 0
Exploring the Effectiveness of Object-Centric Representations in Visual Question Answering: Comparative Insights with Foundation Models | — | 0
QuIIL at T3 challenge: Towards Automation in Life-Saving Intervention Procedures from First-Person View | Code | 0
ProcTag: Process Tagging for Assessing the Efficacy of Document Instruction Data | — | 0
Multimodal Reranking for Knowledge-Intensive Visual Question Answering | — | 0
EchoSight: Advancing Visual-Language Models with Wiki Knowledge | — | 0
TM-PATHVQA: 90000+ Textless Multilingual Questions for Medical Visual Question Answering | — | 0
Benchmarking Vision Language Models for Cultural Understanding | — | 0
Segmentation-guided Attention for Visual Question Answering from Remote Sensing Images | — | 0
Extracting Training Data from Document-Based VQA Models | — | 0
VQA-Diff: Exploiting VQA and Diffusion for Zero-Shot Image-to-3D Vehicle Asset Generation in Autonomous Driving | — | 0
Large Language Models Understand Layout | Code | 0
Rethinking Visual Prompting for Multimodal Large Language Models with External Knowledge | — | 0
Second Place Solution of WSDM2023 Toloka Visual Question Answering Challenge | — | 0
Black-box Model Ensembling for Textual and Visual Question Answering via Information Fusion | Code | 0
BACON: Improving Clarity of Image Captions via Bag-of-Concept Graphs | — | 0
MindBench: A Comprehensive Benchmark for Mind Map Structure Recognition and Analysis | — | 0
InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output | — | 0
Visual Robustness Benchmark for Visual Question Answering (VQA) | Code | 0
Certainly Uncertain: A Benchmark and Metric for Multimodal Epistemic and Aleatoric Awareness | — | 0
Assistive Image Annotation Systems with Deep Learning and Natural Language Capabilities: A Review | — | 0
The Illusion of Competence: Evaluating the Effect of Explanations on Users' Mental Models of Visual Question Answering Systems | Code | 0
FlowVQA: Mapping Multimodal Logic in Visual Question Answering with Flowcharts | — | 0
Disentangling Knowledge-based and Visual Reasoning by Question Decomposition in KB-VQA | — | 0
Enhancing Continual Learning in Visual Question Answering with Modality-Aware Feature Distillation | Code | 0
Evaluating Fairness in Large Vision-Language Models Across Diverse Demographic Attributes and Prompts | Code | 0
Claude 3.5 Sonnet Model Card Addendum | — | 0
GPT-4V Explorations: Mining Autonomous Driving | — | 0
MM-SpuBench: Towards Better Understanding of Spurious Biases in Multimodal LLMs | — | 0
MR-MLLM: Mutual Reinforcement of Multimodal Comprehension and Vision Perception | — | 0
Tri-VQA: Triangular Reasoning Medical Visual Question Answering for Multi-Attribute Analysis | — | 0
Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? | — | 0
Page 22 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | — | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified