SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 226–250 of 2177 papers

Title | Status | Hype
InfMLLM: A Unified Framework for Visual-Language Tasks | Code | 1
Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering | Code | 1
Improving Selective Visual Question Answering by Learning from Your Peers | Code | 1
IMPACT: A Large-scale Integrated Multimodal Patent Analysis and Creation Dataset for Design Patents | Code | 1
ChatVLA: Unified Multimodal Understanding and Robot Control with Vision-Language-Action Model | Code | 1
A Comparison of Pre-trained Vision-and-Language Models for Multimodal Representation Learning across Medical Images and Reports | Code | 1
IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | Code | 1
Inst-IT: Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tuning | Code | 1
I Can't Believe There's No Images! Learning Visual Tasks Using only Language Supervision | Code | 1
Lever LM: Configuring In-Context Sequence to Lever Large Vision Language Models | Code | 1
IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning | Code | 1
Hypergraph Transformer: Weakly-supervised Multi-hop Reasoning for Knowledge-based Visual Question Answering | Code | 1
HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models | Code | 1
I2I: Initializing Adapters with Improvised Knowledge | Code | 1
IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages | Code | 1
Boosting Audio Visual Question Answering via Key Semantic-Aware Cues | Code | 1
Answer Mining from a Pool of Images: Towards Retrieval-Based Visual Question Answering | Code | 1
Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering | Code | 1
Hierarchical Question-Image Co-Attention for Visual Question Answering | Code | 1
In Defense of Grid Features for Visual Question Answering | Code | 1
INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model | Code | 1
How Do Multimodal Large Language Models Handle Complex Multimodal Reasoning? Placing Them in An Extensible Escape Game | Code | 1
HaloQuest: A Visual Hallucination Dataset for Advancing Multimodal Reasoning | Code | 1
AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors | Code | 1
Hallucination Augmented Contrastive Learning for Multimodal Large Language Model | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | — | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified