SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 501–550 of 2177 papers

Title | Status | Hype
Charting the Future: Using Chart Question-Answering for Scalable Evaluation of LLM-Driven Data Visualizations | - | 0
Emu3: Next-Token Prediction is All You Need | Code | 3
Robotic Environmental State Recognition with Pre-Trained Vision-Language Models and Black-Box Optimization | - | 0
DARE: Diverse Visual Question Answering with Robustness Evaluation | - | 0
ZALM3: Zero-Shot Enhancement of Vision-Language Alignment via In-Context Information in Multi-Turn Multimodal Medical Dialogue | - | 0
Uni-Med: A Unified Medical Generalist Foundation Model For Multi-Task Learning Via Connector-MoE | Code | 1
A Unified Hallucination Mitigation Framework for Large Vision-Language Models | Code | 0
MediConfusion: Can you trust your AI radiologist? Probing the reliability of multimodal medical foundation models | Code | 1
Detect, Describe, Discriminate: Moving Beyond VQA for MLLM Evaluation | - | 0
Phantom of Latent for Large Language and Vision Models | Code | 2
Can CLIP Count Stars? An Empirical Study on Quantity Bias in CLIP | - | 0
@Bench: Benchmarking Vision-Language Models for Human-centered Assistive Technology | - | 0
Evaluating Image Hallucination in Text-to-Image Generation with Question-Answering | Code | 1
Vision Language Models Can Parse Floor Plan Maps | - | 0
Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution | Code | 11
Sparks of Artificial General Intelligence(AGI) in Semiconductor Material Science: Early Explorations into the Next Frontier of Generative AI-Assisted Electron Micrograph Analysis | - | 0
Less is More: A Simple yet Effective Token Reduction Method for Efficient Multi-modal LLMs | Code | 1
OneEncoder: A Lightweight Framework for Progressive Alignment of Modalities | - | 0
CAST: Cross-modal Alignment Similarity Test for Vision Language Models | Code | 0
NEVLP: Noise-Robust Framework for Efficient Vision-Language Pre-training | - | 0
Explore the Hallucination on Low-level Perception for MLLMs | - | 0
Guiding Vision-Language Model Selection for Visual Question-Answering Across Tasks, Domains, and Knowledge Types | Code | 0
One missing piece in Vision and Language: A Survey on Comics Understanding | Code | 2
Learning to Compress Contexts for Efficient Knowledge-based Visual Question Answering | - | 0
Securing Vision-Language Models with a Robust Encoder Against Jailbreak and Adversarial Attacks | Code | 0
VisScience: An Extensive Benchmark for Evaluating K12 Educational Multi-modal Scientific Reasoning | - | 0
EyeCLIP: A visual-language foundation model for multi-modal ophthalmic image analysis | Code | 2
Mitigating Hallucination in Visual-Language Models via Re-Balancing Contrastive Decoding | - | 0
LIME: Less Is More for MLLM Evaluation | Code | 1
M3-Jepa: Multimodal Alignment via Multi-directional MoE based on the JEPA framework | Code | 1
Breaking Neural Network Scaling Laws with Modularity | - | 0
POINTS: Improving Your Vision-language Model with Affordable Strategies | - | 0
COLUMBUS: Evaluating COgnitive Lateral Understanding through Multiple-choice reBUSes | Code | 0
OccLLaMA: An Occupancy-Language-Action Generative World Model for Autonomous Driving | - | 0
MOSMOS: Multi-organ segmentation facilitated by medical report supervision | - | 0
How to Determine the Preferred Image Distribution of a Black-Box Vision-Language Model? | Code | 0
Blocks as Probes: Dissecting Categorization Ability of Large Multimodal Models | - | 0
Kvasir-VQA: A Text-Image Pair GI Tract Dataset | Code | 0
Look, Learn and Leverage (L^3): Mitigating Visual-Domain Shift and Discovering Intrinsic Relations via Symbolic Alignment | - | 0
Retrieval-Augmented Natural Language Reasoning for Explainable Visual Question Answering | - | 0
M4CXR: Exploring Multi-task Potentials of Multi-modal Large Language Models for Chest X-ray Interpretation | - | 0
CogVLM2: Visual Language Models for Image and Video Understanding | Code | 9
Can Visual Language Models Replace OCR-Based Visual Question Answering Pipelines in Production? A Case Study in Retail | - | 0
Can SAR improve RSVQA performance? | - | 0
Multi-Modal Instruction-Tuning Small-Scale Language-and-Vision Assistant for Semiconductor Electron Micrograph Analysis | - | 0
Zero-Shot Visual Reasoning by Vision-Language Models: Benchmarking and Analysis | - | 0
Evaluating Attribute Comprehension in Large Vision-Language Models | Code | 0
Towards Human-Level Understanding of Complex Process Engineering Schematics: A Pedagogical, Introspective Multi-Agent Framework for Open-Domain Question Answering | - | 0
Foundational Model for Electron Micrograph Analysis: Instruction-Tuning Small-Scale Language-and-Vision Assistant for Enterprise Adoption | - | 0
MaVEn: An Effective Multi-granularity Hybrid Visual Encoding Framework for Multimodal Large Language Model | - | 0
Page 11 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | - | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | - | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | - | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | - | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | - | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | - | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | - | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | - | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | - | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | - | Unverified