SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 801–825 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| As Firm As Their Foundations: Can open-sourced foundation models be used to create adversarial examples for downstream tasks? | — | 0 |
| Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models | Code | 2 |
| SpatialPIN: Enhancing Spatial Reasoning Capabilities of Vision-Language Models through Prompting and Interacting 3D Priors | — | 0 |
| FlexCap: Describe Anything in Images in Controllable Detail | — | 0 |
| Can LLMs Generate Human-Like Wayfinding Instructions? Towards Platform-Agnostic Embodied Instruction Synthesis | — | 0 |
| SQ-LLaVA: Self-Questioning for Large Vision-Language Assistant | Code | 1 |
| Few-Shot VQA with Frozen LLMs: A Tale of Two Approaches | — | 0 |
| Knowledge Condensation and Reasoning for Knowledge-based VQA | — | 0 |
| Few-Shot Image Classification and Segmentation as Visual Question Answering Using Vision-Language Models | — | 0 |
| Parameter Efficient Reinforcement Learning from Human Feedback | — | 0 |
| Adversarial Training with OCR Modality Perturbation for Scene-Text Visual Question Answering | Code | 0 |
| VisionGPT: Vision-Language Understanding Agent Using Generalized Multimodal Framework | — | 0 |
| MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training | — | 0 |
| Can We Talk Models Into Seeing the World Differently? | Code | 1 |
| Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization | — | 0 |
| Fine-tuning Large Language Models with Sequential Instructions | — | 0 |
| Mitigating the Impact of Attribute Editing on Face Recognition | — | 0 |
| Beyond Text: Frozen Large Language Models in Visual Signal Comprehension | Code | 2 |
| MoAI: Mixture of All Intelligence for Large Language and Vision Models | Code | 3 |
| Multi-modal Auto-regressive Modeling via Visual Words | Code | 1 |
| Answering Diverse Questions via Text Attached with Key Audio-Visual Clues | Code | 0 |
| Mipha: A Comprehensive Overhaul of Multimodal Assistant with Small Language Models | Code | 3 |
| DeepSeek-VL: Towards Real-World Vision-Language Understanding | Code | 7 |
| Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context | Code | 3 |
| SnapNTell: Enhancing Entity-Centric Visual Question Answering with Retrieval Augmented Multimodal LLM | — | 0 |
Page 33 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |