SOTAVerified

Visual Question Answering: MLLM Leaderboard

Papers

Showing 201–250 of 2177 papers

Title | Status | Hype
Med-Flamingo: a Multimodal Medical Few-shot Learner | Code | 2
GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest | Code | 2
JourneyDB: A Benchmark for Generative Image Understanding | Code | 2
Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | Code | 2
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Code | 2
LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models | Code | 2
BiomedGPT: A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks | Code | 2
NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario | Code | 2
OCRBench: On the Hidden Mystery of OCR in Large Multimodal Models | Code | 2
InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning | Code | 2
MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action | Code | 2
PaLM-E: An Embodied Multimodal Language Model | Code | 2
Prophet: Prompting Large Language Models with Complementary Answer Heuristics for Knowledge-based Visual Question Answering | Code | 2
Visual Programming: Compositional visual reasoning without training | Code | 2
PoseScript: Linking 3D Human Poses and Natural Language | Code | 2
Retrieval Augmented Visual Question Answering with Outside Knowledge | Code | 2
Vision-Language Pre-Training with Triple Contrastive Learning | Code | 2
MDETR - Modulated Detection for End-to-End Multi-Modal Understanding | Code | 2
Unified Vision-Language Pre-Training for Image Captioning and VQA | Code | 2
Describe Anything Model for Visual Question Answering on Text-rich Images | Code | 1
SimpleDoc: Multi-Modal Document Understanding with Dual-Cue Page Retrieval and Iterative Refinement | Code | 1
Multi-Step Visual Reasoning with Visual Tokens Scaling and Verification | Code | 1
VideoCAD: A Large-Scale Video Dataset for Learning UI Interactions and 3D Reasoning from CAD Software | Code | 1
Interpreting Chest X-rays Like a Radiologist: A Benchmark with Clinical Reasoning | Code | 1
MangaVQA and MangaLMM: A Benchmark and Specialized Model for Multimodal Manga Understanding | Code | 1
MineAnyBuild: Benchmarking Spatial Planning for Open-world AI Agents | Code | 1
Visualized Text-to-Image Retrieval | Code | 1
SATORI-R1: Incentivizing Multimodal Reasoning with Spatial Grounding and Verifiable Rewards | Code | 1
Are Vision Language Models Ready for Clinical Diagnosis? A 3D Medical Benchmark for Tumor-centric Visual Question Answering | Code | 1
VEAttack: Downstream-agnostic Vision Encoder Attack against Large Vision Language Models | Code | 1
Benchmarking Retrieval-Augmented Multimodal Generation for Document Question Answering | Code | 1
Mitigating Hallucinations in Vision-Language Models through Image-Guided Head Suppression | Code | 1
Reasoning-OCR: Can Large Multimodal Models Solve Complex Logical Reasoning Problems from OCR Cues? | Code | 1
MedAgentBoard: Benchmarking Multi-Agent Collaboration with Conventional Methods for Diverse Medical Tasks | Code | 1
UniBiomed: A Universal Foundation Model for Grounded Biomedical Image Interpretation | Code | 1
ChestX-Reasoner: Advancing Radiology Foundation Models with Reasoning through Step-by-Step Verification | Code | 1
Benchmarking Multimodal Mathematical Reasoning with Explicit Visual Dependency | Code | 1
ReasonDrive: Efficient Visual Question Answering for Autonomous Vehicles with Reasoning-Enhanced Small Vision-Language Models | Code | 1
A Survey on Efficient Vision-Language Models | Code | 1
STING-BEE: Towards Vision-Language Model for Real-World X-ray Baggage Security Inspection | Code | 1
GMAI-VL-R1: Harnessing Reinforcement Learning for Multimodal Medical Reasoning | Code | 1
FaceBench: A Multi-View Multi-Level Facial Attribute VQA Dataset for Benchmarking Face Perception MLLMs | Code | 1
Fine-Grained Evaluation of Large Vision-Language Models in Autonomous Driving | Code | 1
PAVE: Patching and Adapting Video Large Language Models | Code | 1
Mind the Gap: Benchmarking Spatial Reasoning in Vision-Language Models | Code | 1
MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research | Code | 1
NuPlanQA: A Large-Scale Dataset and Benchmark for Multi-View Driving Scene Understanding in Multi-Modal Large Language Models | Code | 1
How Do Multimodal Large Language Models Handle Complex Multimodal Reasoning? Placing Them in An Extensible Escape Game | Code | 1
Question-Aware Gaussian Experts for Audio-Visual Question Answering | Code | 1
ChatVLA: Unified Multimodal Understanding and Robot Control with Vision-Language-Action Model | Code | 1
Page 5 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | – | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | – | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | – | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | – | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | – | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | – | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | – | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | – | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | – | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | – | Unverified