SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1051–1100 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| LLARVA: Vision-Action Instruction Tuning Enhances Robot Learning | | 0 |
| Few-Shot Image Classification and Segmentation as Visual Question Answering Using Vision-Language Models | | 0 |
| FedPIA -- Permuting and Integrating Adapters leveraging Wasserstein Barycenters for Finetuning Foundation Models in Multi-Modal Federated Learning | | 0 |
| LLaVA-Octopus: Unlocking Instruction-Driven Adaptive Projector Fusion for Video Understanding | | 0 |
| Zero-Shot Transfer VQA Dataset | | 0 |
| Feature4X: Bridging Any Monocular Video to 4D Agentic AI with Versatile Gaussian Feature Fields | | 0 |
| Fast or Slow? Integrating Fast Intuition and Deliberate Thinking for Enhancing Visual Question Answering | | 0 |
| FashionVQA: A Domain-Specific Visual Question Answering System | | 0 |
| LLaVA-Ultra: Large Chinese Language and Vision Assistant for Ultrasound | | 0 |
| Face-MLLM: A Large Face Perception Model | | 0 |
| VGNMN: Video-grounded Neural Module Networks for Video-Grounded Dialogue Systems | | 0 |
| Look Before You Decide: Prompting Active Deduction of MLLMs for Assumptive Reasoning | | 0 |
| LMME3DHF: Benchmarking and Evaluating Multimodal 3D Human Face Generation with LMMs | | 0 |
| EyeFound: A Multimodal Generalist Foundation Model for Ophthalmic Imaging | | 0 |
| Localize, Group, and Select: Boosting Text-VQA by Scene Text Modeling | | 0 |
| VGNMN: Video-grounded Neural Module Network to Video-Grounded Language Tasks | | 0 |
| Locate Then Generate: Bridging Vision and Language with Bounding Box for Scene-Text VQA | | 0 |
| Extracting Training Data from Document-Based VQA Models | | 0 |
| Achieving Human Parity on Visual Question Answering | | 0 |
| Logically Consistent Loss for Visual Question Answering | | 0 |
| LOIS: Looking Out of Instance Semantics for Visual Question Answering | | 0 |
| Looking Beyond Text: Reducing Language bias in Large Vision-Language Models via Multimodal Dual-Attention and Soft-Image Guidance | | 0 |
| Look, Learn and Leverage (L^3): Mitigating Visual-Domain Shift and Discovering Intrinsic Relations via Symbolic Alignment | | 0 |
| Exploring Weaknesses of VQA Models through Attribution Driven Insights | | 0 |
| Look, Read and Ask: Learning to Ask Questions by Reading Text in Images | | 0 |
| When are Lemons Purple? The Concept Association Bias of Vision-Language Models | | 0 |
| Accuracy vs. Complexity: A Trade-off in Visual Question Answering Models | | 0 |
| Exploring the Frontier of Vision-Language Models: A Survey of Current Methodologies and Future Directions | | 0 |
| An Empirical Study on Leveraging Scene Graphs for Visual Question Answering | | 0 |
| LRRA: A Transparent Neural-Symbolic Reasoning Framework for Real-World Visual Question Answering | | 0 |
| Exploring the Effectiveness of Object-Centric Representations in Visual Question Answering: Comparative Insights with Foundation Models | | 0 |
| Can You Explain That? Lucid Explanations Help Human-AI Collaborative Image Retrieval | | 0 |
| LVLM_CSP: Accelerating Large Vision Language Models via Clustering, Scattering, and Pruning for Reasoning Segmentation | | 0 |
| Exploring Spatial Language Grounding Through Referring Expressions | | 0 |
| Exploring Sparse Spatial Relation in Graph Inference for Text-Based VQA | | 0 |
| An Empirical Study of Batch Normalization and Group Normalization in Conditional Computation | | 0 |
| Exploring Question Decomposition for Zero-Shot VQA | | 0 |
| Exploring Human-like Attention Supervision in Visual Question Answering | | 0 |
| M3DocRAG: Multi-modal Retrieval is What You Need for Multi-page Multi-document Understanding | | 0 |
| M4CXR: Exploring Multi-task Potentials of Multi-modal Large Language Models for Chest X-ray Interpretation | | 0 |
| MagiC: Evaluating Multimodal Cognition Toward Grounded Visual Reasoning | | 0 |
| MAGIC-VQA: Multimodal And Grounded Inference with Commonsense Knowledge for Visual Question Answering | | 0 |
| Exploring Diverse Methods in Visual Question Answering | | 0 |
| Exploring Advanced Techniques for Visual Question Answering: A Comprehensive Comparison | | 0 |
| Making the Most of What You Have: Adapting Pre-trained Visual Language Models in the Low-data Regime | | 0 |
| An Empirical Evaluation of Visual Question Answering for Novel Objects | | 0 |
| Explore the Hallucination on Low-level Perception for MLLMs | | 0 |
| Video Question Answering for People with Visual Impairments Using an Egocentric 360-Degree Camera | | 0 |
| MAMO: Masked Multimodal Modeling for Fine-Grained Vision-Language Representation Learning | | 0 |
| Explicit Reasoning over End-to-End Neural Architectures for Visual Question Answering | | 0 |
Page 22 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |