SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 501–550 of 2177 papers

| Title | Status | Hype |
|-------|--------|------|
| Multimodal Co-Attention Transformer for Survival Prediction in Gigapixel Whole Slide Images | Code | 1 |
| ChatVLA: Unified Multimodal Understanding and Robot Control with Vision-Language-Action Model | Code | 1 |
| Going Full-TILT Boogie on Document Understanding with Text-Image-Layout Transformer | Code | 1 |
| Good Questions Help Zero-Shot Image Reasoning | Code | 1 |
| MFC-Bench: Benchmarking Multimodal Fact-Checking with Large Vision-Language Models | Code | 1 |
| Uncertainty-Aware Evaluation for Vision-Language Models | Code | 1 |
| Multi-modal Auto-regressive Modeling via Visual Words | Code | 1 |
| Multimodal Federated Learning via Contrastive Representation Ensemble | Code | 1 |
| MemeCap: A Dataset for Captioning and Interpreting Memes | Code | 1 |
| Change Detection Meets Visual Question Answering | Code | 1 |
| OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge | Code | 1 |
| Global and Local Semantic Completion Learning for Vision-Language Pre-training | Code | 1 |
| Genixer: Empowering Multimodal Large Language Models as a Powerful Data Generator | Code | 1 |
| AI2-THOR: An Interactive 3D Environment for Visual AI | Code | 1 |
| GENOME: GenerativE Neuro-symbOlic visual reasoning by growing and reusing ModulEs | Code | 1 |
| GMAI-VL-R1: Harnessing Reinforcement Learning for Multimodal Medical Reasoning | Code | 1 |
| GraphVQA: Language-Guided Graph Neural Networks for Graph-based Visual Question Answering | Code | 1 |
| Multimodal fusion of imaging and genomics for lung cancer recurrence prediction | Code | 1 |
| NuScenes-MQA: Integrated Evaluation of Captions and QA for Autonomous Driving Datasets using Markup Annotations | Code | 1 |
| Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs? | Code | 1 |
| Generative Bias for Robust Visual Question Answering | Code | 1 |
| MuKEA: Multimodal Knowledge Extraction and Accumulation for Knowledge-based Visual Question Answering | Code | 1 |
| Gemini: A Family of Highly Capable Multimodal Models | Code | 1 |
| Gemini Goes to Med School: Exploring the Capabilities of Multimodal Large Language Models on Medical Challenge Problems & Hallucinations | Code | 1 |
| Multi-Agent VQA: Exploring Multi-Agent Foundation Models in Zero-Shot Visual Question Answering | Code | 1 |
| CausalChaos! Dataset for Comprehensive Causal Action Question Answering Over Longer Causal Chains Grounded in Dynamic Visual Scenes | Code | 1 |
| Gated Hierarchical Attention for Image Captioning | Code | 1 |
| A Hitchhiker's Guide to Fine-Grained Face Forgery Detection Using Common Sense Reasoning | Code | 1 |
| MultiChartQA: Benchmarking Vision-Language Models on Multi-Chart Problems | Code | 1 |
| Can We Talk Models Into Seeing the World Differently? | Code | 1 |
| mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections | Code | 1 |
| MMXU: A Multi-Modal and Multi-X-ray Understanding Dataset for Disease Progression | Code | 1 |
| Found a Reason for me? Weakly-supervised Grounded Visual Question Answering using Capsules | Code | 1 |
| Are Vision Language Models Ready for Clinical Diagnosis? A 3D Medical Benchmark for Tumor-centric Visual Question Answering | Code | 1 |
| 3D-Aware Visual Question Answering about Parts, Poses and Occlusions | Code | 1 |
| Florence: A New Foundation Model for Computer Vision | Code | 1 |
| Modular Visual Question Answering via Code Generation | Code | 1 |
| Fine-grained Image Classification and Retrieval by Combining Visual and Locally Pooled Textual Features | Code | 1 |
| EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Language Models | Code | 1 |
| Fine-Grained Evaluation of Large Vision-Language Models in Autonomous Driving | Code | 1 |
| Faithful Multimodal Explanation for Visual Question Answering | Code | 1 |
| EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images | Code | 1 |
| MMFT-BERT: Multimodal Fusion Transformer with BERT Encodings for Visual Question Answering | Code | 1 |
| Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions? | Code | 1 |
| FaceBench: A Multi-View Multi-Level Facial Attribute VQA Dataset for Benchmarking Face Perception MLLMs | Code | 1 |
| Expressive Scene Graph Generation Using Commonsense Knowledge Infusion for Visual Understanding and Reasoning | Code | 1 |
| Explaining Autonomous Driving Actions with Visual Question Answering | Code | 1 |
| Expert Knowledge-Aware Image Difference Graph Representation Learning for Difference-Aware Medical Visual Question Answering | Code | 1 |
| Foundation Model is Efficient Multimodal Multitask Model Selector | Code | 1 |
| GPT-4V-AD: Exploring Grounding Potential of VQA-oriented GPT-4V for Zero-shot Anomaly Detection | Code | 1 |
Page 11 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |