SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 2001–2025 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| What Large Language Models Bring to Text-rich VQA? | | 0 |
| Improving Users' Mental Model with Attention-directed Counterfactual Edits | | 0 |
| Improving Visual Question Answering by Referring to Generated Paragraph Captions | | 0 |
| Improving Visual Question Answering Models through Robustness Analysis and In-Context Learning with a Chain of Basic Questions | | 0 |
| Improving VQA and its Explanations by Comparing Competing Explanations | | 0 |
| Are VQA Systems RAD? Measuring Robustness to Augmented Data with Focused Interventions | | 0 |
| Incorporating External Knowledge to Answer Open-Domain Visual Questions with Dynamic Memory Networks | | 0 |
| A Restricted Visual Turing Test for Deep Scene and Event Understanding | | 0 |
| Generic Attention-model Explainability by Weighted Relevance Accumulation | | 0 |
| In Factuality: Efficient Integration of Relevant Facts for Visual Question Answering | | 0 |
| InfiMM-HD: A Leap Forward in High-Resolution Multimodal Understanding | | 0 |
| Generative Visual Question Answering | | 0 |
| Generating Triples with Adversarial Networks for Scene Graph Construction | | 0 |
| Generating Rationales in Visual Question Answering | | 0 |
| InfographicVQA | | 0 |
| Inquire, Interact, and Integrate: A Proactive Agent Collaborative Framework for Zero-Shot Multimodal Medical Reasoning | | 0 |
| Instance-Level Trojan Attacks on Visual Question Answering via Adversarial Learning in Neuron Activation Space | | 0 |
| Generating Natural Questions from Images for Multimodal Assistants | | 0 |
| Generating Natural Language Explanations for Visual Question Answering using Scene Graphs and Visual Attention | | 0 |
| Instruction-augmented Multimodal Alignment for Image-Text and Element Matching | | 0 |
| Generate then Select: Open-ended Visual Question Answering Guided by World Knowledge | | 0 |
| Generalized Hadamard-Product Fusion Operators for Visual Question Answering | | 0 |
| Uni-Mlip: Unified Self-supervision for Medical Vision Language Pre-training | | 0 |
| Instruction-Oriented Preference Alignment for Enhancing Multi-Modal Comprehension Capability of MLLMs | | 0 |
| Integrating Frequency-Domain Representations with Low-Rank Adaptation in Vision-Language Models | | 0 |
Page 81 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |