SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 776–800 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| Uncovering Bias in Large Vision-Language Models with Counterfactuals | | 0 |
| VHM: Versatile and Honest Vision Language Model for Remote Sensing Image Analysis | Code | 2 |
| Unsolvable Problem Detection: Evaluating Trustworthiness of Vision Language Models | Code | 2 |
| JDocQA: Japanese Document Question Answering Dataset for Generative Language Models | Code | 1 |
| Multi-Frame, Lightweight & Efficient Vision-Language Models for Question Answering in Autonomous Driving | Code | 2 |
| Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models | Code | 7 |
| Beyond Embeddings: The Promise of Visual Table in Visual Reasoning | Code | 1 |
| Quantifying and Mitigating Unimodal Biases in Multimodal Large Language Models: A Causal Perspective | Code | 1 |
| Intrinsic Subgraph Generation for Interpretable Graph based Visual Question Answering | Code | 0 |
| Visual Hallucination: Definition, Quantification, and Prescriptive Remediations | | 0 |
| A Gaze-grounded Visual Question Answering Dataset for Clarifying Ambiguous Japanese Questions | | 0 |
| PropTest: Automatic Property Testing for Improved Visual Programming | | 0 |
| Synthesize Step-by-Step: Tools, Templates and LLMs as Data Generators for Reasoning-Based Chart VQA | | 0 |
| IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | Code | 1 |
| Surgical-LVLM: Learning to Adapt Large Vision-Language Model for Grounded Visual Question Answering in Robotic Surgery | | 0 |
| MedPromptX: Grounded Multimodal Prompting for Chest X-ray Diagnosis | Code | 2 |
| LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models | Code | 2 |
| Multi-Agent VQA: Exploring Multi-Agent Foundation Models in Zero-Shot Visual Question Answering | Code | 1 |
| Language Repository for Long Video Understanding | Code | 1 |
| MyVLM: Personalizing VLMs for User-Specific Queries | | 0 |
| VL-Mamba: Exploring State Space Models for Multimodal Learning | | 0 |
| Improved Baselines for Data-efficient Perceptual Augmentation of LLMs | | 0 |
| HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models | Code | 1 |
| WoLF: Wide-scope Large Language Model Framework for CXR Understanding | | 0 |
| VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning | Code | 2 |
Page 32 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |