SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 276–300 of 2177 papers

Title | Status | Hype
VividMed: Vision Language Model with Versatile Visual Grounding for Medicine | Code | 1
Towards Foundation Models for 3D Vision: How Close Are We? | Code | 1
Skipping Computations in Multimodal LLMs | Code | 1
Dynamic Multimodal Evaluation with Flexible Complexity by Vision-Language Bootstrapping | Code | 1
ActiView: Evaluating Active Perception Ability for Multimodal Large Language Models | Code | 1
MC-CoT: A Modular Collaborative CoT Framework for Zero-shot Medical-VQA with LLM and MLLM Integration | Code | 1
A Hitchhikers Guide to Fine-Grained Face Forgery Detection Using Common Sense Reasoning | Code | 1
T2Vs Meet VLMs: A Scalable Multimodal Dataset for Visual Harmfulness Recognition | Code | 1
Uni-Med: A Unified Medical Generalist Foundation Model For Multi-Task Learning Via Connector-MoE | Code | 1
MediConfusion: Can you trust your AI radiologist? Probing the reliability of multimodal medical foundation models | Code | 1
Evaluating Image Hallucination in Text-to-Image Generation with Question-Answering | Code | 1
Less is More: A Simple yet Effective Token Reduction Method for Efficient Multi-modal LLMs | Code | 1
LIME: Less Is More for MLLM Evaluation | Code | 1
M3-Jepa: Multimodal Alignment via Multi-directional MoE based on the JEPA framework | Code | 1
V-RoAst: Visual Road Assessment. Can VLM be a Road Safety Assessor Using the iRAP Standard? | Code | 1
Visual Agents as Fast and Slow Thinkers | Code | 1
Surgical-VQLA++: Adversarial Contrastive Learning for Calibrated Robust Visual Question-Localized Answering in Robotic Surgery | Code | 1
Boosting Audio Visual Question Answering via Key Semantic-Aware Cues | Code | 1
INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model | Code | 1
Learning Trimodal Relation for AVQA with Missing Modality | Code | 1
HaloQuest: A Visual Hallucination Dataset for Advancing Multimodal Reasoning | Code | 1
Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark | Code | 1
CVLUE: A New Benchmark Dataset for Chinese Vision-Language Understanding Evaluation | Code | 1
MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment | Code | 1
STLLaVA-Med: Self-Training Large Language and Vision Assistant for Medical Question-Answering | Code | 1
Page 12 of 88

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | – | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | – | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | – | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | – | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | – | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | – | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | – | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | – | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | – | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | – | Unverified