SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1076–1100 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| Benchmarking Vision Language Models for Cultural Understanding | | 0 |
| Segmentation-guided Attention for Visual Question Answering from Remote Sensing Images | | 0 |
| Extracting Training Data from Document-Based VQA Models | | 0 |
| VQA-Diff: Exploiting VQA and Diffusion for Zero-Shot Image-to-3D Vehicle Asset Generation in Autonomous Driving | | 0 |
| Large Language Models Understand Layout | Code | 0 |
| Rethinking Visual Prompting for Multimodal Large Language Models with External Knowledge | | 0 |
| Second Place Solution of WSDM2023 Toloka Visual Question Answering Challenge | | 0 |
| Black-box Model Ensembling for Textual and Visual Question Answering via Information Fusion | Code | 0 |
| BACON: Improving Clarity of Image Captions via Bag-of-Concept Graphs | | 0 |
| MindBench: A Comprehensive Benchmark for Mind Map Structure Recognition and Analysis | | 0 |
| InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output | | 0 |
| Visual Robustness Benchmark for Visual Question Answering (VQA) | Code | 0 |
| Certainly Uncertain: A Benchmark and Metric for Multimodal Epistemic and Aleatoric Awareness | | 0 |
| Assistive Image Annotation Systems with Deep Learning and Natural Language Capabilities: A Review | | 0 |
| The Illusion of Competence: Evaluating the Effect of Explanations on Users' Mental Models of Visual Question Answering Systems | Code | 0 |
| FlowVQA: Mapping Multimodal Logic in Visual Question Answering with Flowcharts | | 0 |
| Disentangling Knowledge-based and Visual Reasoning by Question Decomposition in KB-VQA | | 0 |
| Enhancing Continual Learning in Visual Question Answering with Modality-Aware Feature Distillation | Code | 0 |
| Evaluating Fairness in Large Vision-Language Models Across Diverse Demographic Attributes and Prompts | Code | 0 |
| Claude 3.5 Sonnet Model Card Addendum | | 0 |
| GPT-4V Explorations: Mining Autonomous Driving | | 0 |
| MM-SpuBench: Towards Better Understanding of Spurious Biases in Multimodal LLMs | | 0 |
| MR-MLLM: Mutual Reinforcement of Multimodal Comprehension and Vision Perception | | 0 |
| Tri-VQA: Triangular Reasoning Medical Visual Question Answering for Multi-Attribute Analysis | | 0 |
| Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? | | 0 |
Page 44 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |