SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1676–1700 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| Single-Modal Entropy based Active Learning for Visual Question Answering | | 0 |
| Adversarial Regularization for Visual Question Answering: Strengths, Shortcomings, and Side Effects | | 0 |
| SITE: towards Spatial Intelligence Thorough Evaluation | | 0 |
| Can CLIP Count Stars? An Empirical Study on Quantity Bias in CLIP | | 0 |
| Calibrating Uncertainty Quantification of Multi-Modal LLMs using Grounding | | 0 |
| CAD -- Contextual Multi-modal Alignment for Dynamic AVQA | | 0 |
| Building Trustworthy Multimodal AI: A Review of Fairness, Transparency, and Ethics in Vision-Language Tasks | | 0 |
| BuDDIE: A Business Document Dataset for Multi-task Information Extraction | | 0 |
| 3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models | | 0 |
| Small Language Model Meets with Reinforced Vision Vocabulary | | 0 |
| SMMILE: An Expert-Driven Benchmark for Multimodal Medical In-Context Learning | | 0 |
| Adversarial Multimodal Network for Movie Question Answering | | 0 |
| SnapNTell: Enhancing Entity-Centric Visual Question Answering with Retrieval Augmented Multimodal LLM | | 0 |
| SocialGesture: Delving into Multi-person Gesture Understanding | | 0 |
| Bridging the Semantic Gaps: Improving Medical VQA Consistency with LLM-Augmented Question Sets | | 0 |
| VL-BEiT: Generative Vision-Language Pretraining | | 0 |
| Solution for SMART-101 Challenge of CVPR Multi-modal Algorithmic Reasoning Task 2024 | | 0 |
| Solution for SMART-101 Challenge of ICCV Multi-modal Algorithmic Reasoning Task 2023 | | 0 |
| Solving Visual Madlibs with Multiple Cues | | 0 |
| Adversarial Attacks Beyond the Image Space | | 0 |
| Sparks of Artificial General Intelligence (AGI) in Semiconductor Material Science: Early Explorations into the Next Frontier of Generative AI-Assisted Electron Micrograph Analysis | | 0 |
| Adventurer's Treasure Hunt: A Transparent System for Visually Grounded Compositional Visual Question Answering based on Scene Graphs | | 0 |
| Bridge Damage Cause Estimation Using Multiple Images Based on Visual Question Answering | | 0 |
| VLFeedback: A Large-Scale AI Feedback Dataset for Large Vision-Language Models Alignment | | 0 |
| Sparse Attention Vectors: Generative Multimodal Model Features Are Discriminative Vision-Language Classifiers | | 0 |
Page 68 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |