SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 2001–2010 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| What Large Language Models Bring to Text-rich VQA? | | 0 |
| Improving Users' Mental Model with Attention-directed Counterfactual Edits | | 0 |
| Improving Visual Question Answering by Referring to Generated Paragraph Captions | | 0 |
| Improving Visual Question Answering Models through Robustness Analysis and In-Context Learning with a Chain of Basic Questions | | 0 |
| Improving VQA and its Explanations by Comparing Competing Explanations | | 0 |
| Are VQA Systems RAD? Measuring Robustness to Augmented Data with Focused Interventions | | 0 |
| Incorporating External Knowledge to Answer Open-Domain Visual Questions with Dynamic Memory Networks | | 0 |
| A Restricted Visual Turing Test for Deep Scene and Event Understanding | | 0 |
| Generic Attention-model Explainability by Weighted Relevance Accumulation | | 0 |
| In Factuality: Efficient Integration of Relevant Facts for Visual Question Answering | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |