SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 651–700 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| Vision-Language Models Meet Meteorology: Developing Models for Extreme Weather Events Detection with Heatmaps | Code | 1 |
| SHMamba: Structured Hyperbolic State Space Model for Audio-Visual Question Answering | | 0 |
| Yo'LLaVA: Your Personalized Language and Vision Assistant | Code | 2 |
| Optimizing Visual Question Answering Models for Driving: Bridging the Gap Between Human and Machine Attention Patterns | | 0 |
| Towards Vision-Language Geo-Foundation Model: A Survey | Code | 2 |
| Towards Multilingual Audio-Visual Question Answering | Code | 0 |
| Explore the Limits of Omni-modal Pretraining at Scale | Code | 2 |
| Advancing High Resolution Vision-Language Models in Biomedicine | Code | 1 |
| What If We Recaption Billions of Web Images with LLaMA-3? | | 0 |
| VisionLLM v2: An End-to-End Generalist Multimodal Large Language Model for Hundreds of Vision-Language Tasks | Code | 5 |
| DistilDoc: Knowledge Distillation for Visually-Rich Document Applications | | 0 |
| Benchmarking Vision-Language Contrastive Methods for Medical Representation Learning | Code | 0 |
| RS-Agent: Automating Remote Sensing Tasks through Intelligent Agent | Code | 2 |
| VCR: A Task for Pixel-Level Complex Reasoning in Vision Language Models via Restoring Occluded Text | Code | 1 |
| Solution for SMART-101 Challenge of CVPR Multi-modal Algorithmic Reasoning Task 2024 | | 0 |
| CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark | | 0 |
| Towards Semantic Equivalence of Tokenization in Multimodal LLM | | 0 |
| DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effective for LMMs | | 0 |
| RoboMamba: Efficient Vision-Language-Action Model for Robotic Reasoning and Manipulation | | 0 |
| Understanding Information Storage and Transfer in Multi-modal Large Language Models | | 0 |
| Balancing Performance and Efficiency in Zero-shot Robotic Navigation | | 0 |
| Wings: Learning Multimodal LLMs without Text-only Forgetting | Code | 5 |
| From Redundancy to Relevance: Information Flow in LVLMs Across Reasoning Tasks | Code | 2 |
| Diffusion-Refined VQA Annotations for Semi-Supervised Gaze Following | Code | 0 |
| Story Generation from Visual Inputs: Techniques, Related Tasks, and Challenges | | 0 |
| Translation Deserves Better: Analyzing Translation Artifacts in Cross-lingual Visual Question Answering | | 0 |
| Re-ReST: Reflection-Reinforced Self-Training for Language Agents | Code | 1 |
| Dragonfly: Multi-Resolution Zoom-In Encoding Enhances Vision-Language Models | Code | 2 |
| Mixture of Rationale: Multi-Modal Reasoning Mixture for Visual Question Answering | | 0 |
| Selectively Answering Visual Questions | | 0 |
| Video Question Answering for People with Visual Impairments Using an Egocentric 360-Degree Camera | | 0 |
| VQA Training Sets are Self-play Environments for Generating Few-shot Pools | | 0 |
| Enhancing Large Vision Language Models with Self-Training on Image Comprehension | Code | 2 |
| Worse than Random? An Embarrassingly Simple Probing Evaluation of Large Multimodal Models in Medical VQA | Code | 1 |
| Instruction-Guided Visual Masking | Code | 1 |
| Uncovering Bias in Large Vision-Language Models at Scale with Counterfactuals | | 0 |
| Reverse Image Retrieval Cues Parametric Memory in Multimodal LLMs | Code | 1 |
| Evaluating Zero-Shot GPT-4V Performance on 3D Visual Question Answering Benchmarks | | 0 |
| MetaToken: Detecting Hallucination in Image Descriptions by Meta Classification | | 0 |
| Data-augmented phrase-level alignment for mitigating object hallucination | | 0 |
| MMCTAgent: Multi-modal Critical Thinking Agent Framework for Complex Visual Reasoning | | 0 |
| RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness | Code | 11 |
| Enhancing Visual-Language Modality Alignment in Large Vision Language Models via Self-Improvement | Code | 2 |
| Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models | Code | 2 |
| ConvLLaVA: Hierarchical Backbones as Visual Encoder for Large Multimodal Models | Code | 2 |
| Prompt-Aware Adapter: Towards Learning Adaptive Visual Tokens for Multimodal Large Language Models | | 0 |
| Reframing Spatial Reasoning Evaluation in Language Models: A Real-World Simulation Benchmark for Qualitative Reasoning | Code | 0 |
| LOVA3: Learning to Visual Question Answering, Asking and Assessment | Code | 2 |
| A Survey on Vision-Language-Action Models for Embodied AI | Code | 4 |
| SearchLVLMs: A Plug-and-Play Framework for Augmenting Large Vision-Language Models by Searching Up-to-Date Internet Knowledge | | 0 |
Page 14 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |