SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1001–1050 of 2177 papers

Title | Status | Hype
MM1.5: Methods, Analysis & Insights from Multimodal LLM Fine-tuning | - | 0
World to Code: Multi-modal Data Generation via Self-Instructed Compositional Captioning and Filtering | Code | 0
3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models | - | 0
TrojVLM: Backdoor Attack Against Vision Language Models | - | 0
Charting the Future: Using Chart Question-Answering for Scalable Evaluation of LLM-Driven Data Visualizations | - | 0
Enhancing Explainability in Multimodal Large Language Models Using Ontological Context | - | 0
DARE: Diverse Visual Question Answering with Robustness Evaluation | - | 0
Robotic Environmental State Recognition with Pre-Trained Vision-Language Models and Black-Box Optimization | - | 0
ZALM3: Zero-Shot Enhancement of Vision-Language Alignment via In-Context Information in Multi-Turn Multimodal Medical Dialogue | - | 0
A Unified Hallucination Mitigation Framework for Large Vision-Language Models | Code | 0
Detect, Describe, Discriminate: Moving Beyond VQA for MLLM Evaluation | - | 0
Can CLIP Count Stars? An Empirical Study on Quantity Bias in CLIP | - | 0
@Bench: Benchmarking Vision-Language Models for Human-centered Assistive Technology | - | 0
Vision Language Models Can Parse Floor Plan Maps | - | 0
Sparks of Artificial General Intelligence(AGI) in Semiconductor Material Science: Early Explorations into the Next Frontier of Generative AI-Assisted Electron Micrograph Analysis | - | 0
CAST: Cross-modal Alignment Similarity Test for Vision Language Models | Code | 0
OneEncoder: A Lightweight Framework for Progressive Alignment of Modalities | - | 0
Explore the Hallucination on Low-level Perception for MLLMs | - | 0
NEVLP: Noise-Robust Framework for Efficient Vision-Language Pre-training | - | 0
Guiding Vision-Language Model Selection for Visual Question-Answering Across Tasks, Domains, and Knowledge Types | Code | 0
Learning to Compress Contexts for Efficient Knowledge-based Visual Question Answering | - | 0
Securing Vision-Language Models with a Robust Encoder Against Jailbreak and Adversarial Attacks | Code | 0
Mitigating Hallucination in Visual-Language Models via Re-Balancing Contrastive Decoding | - | 0
VisScience: An Extensive Benchmark for Evaluating K12 Educational Multi-modal Scientific Reasoning | - | 0
Breaking Neural Network Scaling Laws with Modularity | - | 0
POINTS: Improving Your Vision-language Model with Affordable Strategies | - | 0
COLUMBUS: Evaluating COgnitive Lateral Understanding through Multiple-choice reBUSes | Code | 0
OccLLaMA: An Occupancy-Language-Action Generative World Model for Autonomous Driving | - | 0
MOSMOS: Multi-organ segmentation facilitated by medical report supervision | - | 0
Blocks as Probes: Dissecting Categorization Ability of Large Multimodal Models | - | 0
How to Determine the Preferred Image Distribution of a Black-Box Vision-Language Model? | Code | 0
Kvasir-VQA: A Text-Image Pair GI Tract Dataset | Code | 0
Look, Learn and Leverage (L^3): Mitigating Visual-Domain Shift and Discovering Intrinsic Relations via Symbolic Alignment | - | 0
Retrieval-Augmented Natural Language Reasoning for Explainable Visual Question Answering | - | 0
M4CXR: Exploring Multi-task Potentials of Multi-modal Large Language Models for Chest X-ray Interpretation | - | 0
Can Visual Language Models Replace OCR-Based Visual Question Answering Pipelines in Production? A Case Study in Retail | - | 0
Can SAR improve RSVQA performance? | - | 0
Zero-Shot Visual Reasoning by Vision-Language Models: Benchmarking and Analysis | - | 0
Multi-Modal Instruction-Tuning Small-Scale Language-and-Vision Assistant for Semiconductor Electron Micrograph Analysis | - | 0
Evaluating Attribute Comprehension in Large Vision-Language Models | Code | 0
Towards Human-Level Understanding of Complex Process Engineering Schematics: A Pedagogical, Introspective Multi-Agent Framework for Open-Domain Question Answering | - | 0
Foundational Model for Electron Micrograph Analysis: Instruction-Tuning Small-Scale Language-and-Vision Assistant for Enterprise Adoption | - | 0
MaVEn: An Effective Multi-granularity Hybrid Visual Encoding Framework for Multimodal Large Language Model | - | 0
SEA: Supervised Embedding Alignment for Token-Level Visual-Textual Integration in MLLMs | - | 0
CluMo: Cluster-based Modality Fusion Prompt for Continual Learning in Visual Question Answering | Code | 0
Swarm Intelligence in Geo-Localization: A Multi-Agent Large Vision-Language Model Collaborative Framework | - | 0
TeamLoRA: Boosting Low-Rank Adaptation with Expert Collaboration and Competition | Code | 0
FEDMEKI: A Benchmark for Scaling Medical Foundation Models via Federated Knowledge Injection | Code | 0
Med-PMC: Medical Personalized Multi-modal Consultation with a Proactive Ask-First-Observe-Next Paradigm | Code | 0
Beyond the Hype: A dispassionate look at vision-language models in medical scenario | - | 0
Page 21 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | - | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | - | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | - | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | - | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | - | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | - | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | - | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | - | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | - | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | - | Unverified