SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 701–750 of 2177 papers (page 15 of 44)

| Title | Status | Hype |
| --- | --- | --- |
| Multi-Modal Explainable Medical AI Assistant for Trustworthy Human-AI Collaboration | | 0 |
| OMGM: Orchestrate Multiple Granularities and Modalities for Efficient Multimodal Retrieval | | 0 |
| Natural Reflection Backdoor Attack on Vision Language Model for Autonomous Driving | | 0 |
| SITE: towards Spatial Intelligence Thorough Evaluation | | 0 |
| Probabilistic Embeddings for Frozen Vision-Language Models: Uncertainty Quantification with Gaussian Process Latent Variable Models | Code | 0 |
| Task-Oriented Semantic Communication in Large Multimodal Models-based Vehicle Networks | | 0 |
| Structure Causal Models and LLMs Integration in Medical Visual Question Answering | | 0 |
| Sim2Real Transfer for Vision-Based Grasp Verification | Code | 0 |
| Compositional Image-Text Matching and Retrieval by Grounding Entities | Code | 0 |
| Adaptive Token Boundaries: Integrating Human Chunking Mechanisms into Multimodal LLMs | | 0 |
| Knowledge-Augmented Language Models Interpreting Structured Chest X-Ray Findings | | 0 |
| Grounding Task Assistance with Multimodal Cues from a Single Demonstration | | 0 |
| Transferable Adversarial Attacks on Black-Box Vision-Language Models | | 0 |
| AdCare-VLM: Leveraging Large Vision Language Model (LVLM) to Monitor Long-Term Medication Adherence and Care | Code | 0 |
| Unlearning Sensitive Information in Multimodal LLMs: Benchmark and Attack-Defense Evaluation | Code | 0 |
| Calibrating Uncertainty Quantification of Multi-Modal LLMs using Grounding | | 0 |
| LMME3DHF: Benchmarking and Evaluating Multimodal 3D Human Face Generation with LMMs | | 0 |
| SpatialReasoner: Towards Explicit and Generalizable 3D Spatial Reasoning | | 0 |
| Data-Driven Calibration of Prediction Sets in Large Vision-Language Models Based on Inductive Conformal Prediction | | 0 |
| TraveLLaMA: Facilitating Multi-modal Large Language Models to Understand Urban Scenes and Provide Travel Assistance | | 0 |
| Are Vision LLMs Road-Ready? A Comprehensive Benchmark for Safety-Critical Driving Video Understanding | Code | 0 |
| Neglected Risks: The Disturbing Reality of Children's Images in Datasets and the Urgent Call for Accountability | | 0 |
| Hadamard product in deep learning: Introduction, Advances and Challenges | | 0 |
| Bridging the Semantic Gaps: Improving Medical VQA Consistency with LLM-Augmented Question Sets | | 0 |
| Instruction-augmented Multimodal Alignment for Image-Text and Element Matching | | 0 |
| QAVA: Query-Agnostic Visual Attack to Large Vision-Language Models | Code | 0 |
| LVLM_CSP: Accelerating Large Vision Language Models via Clustering, Scattering, and Pruning for Reasoning Segmentation | | 0 |
| Building Trustworthy Multimodal AI: A Review of Fairness, Transparency, and Ethics in Vision-Language Tasks | | 0 |
| VDocRAG: Retrieval-Augmented Generation over Visually-Rich Documents | | 0 |
| MMKB-RAG: A Multi-Modal Knowledge-Based Retrieval-Augmented Generation Framework | | 0 |
| NoTeS-Bank: Benchmarking Neural Transcription and Search for Scientific Notes Understanding | | 0 |
| AstroLLaVA: towards the unification of astronomical data and natural language | | 0 |
| Beyond the Frame: Generating 360° Panoramic Videos from Perspective Videos | | 0 |
| Data Metabolism: An Efficient Data Design Schema For Vision Language Model | | 0 |
| TokenFocus-VQA: Enhancing Text-to-Image Alignment with Position-Aware Focus and Multi-Perspective Aggregations on LVLMs | | 0 |
| Resource-efficient Inference with Foundation Model Programs | Code | 0 |
| RS-RAG: Bridging Remote Sensing Imagery and Comprehensive Knowledge with a Multi-Modal Dataset and Retrieval-Augmented Generation Model | | 0 |
| Enhancing Compositional Reasoning in Vision-Language Models with Synthetic Preference Data | Code | 0 |
| Hierarchical Modeling for Medical Visual Question Answering with Cross-Attention Fusion | | 0 |
| QIRL: Boosting Visual Question Answering via Optimized Question-Image Relation Learning | | 0 |
| SocialGesture: Delving into Multi-person Gesture Understanding | | 0 |
| SViQA: A Unified Speech-Vision Multimodal Model for Textless Visual Question Answering | | 0 |
| MPDrive: Improving Spatial Understanding with Marker-Based Prompt Learning for Autonomous Driving | | 0 |
| KOFFVQA: An Objectively Evaluated Free-form VQA Benchmark for Large Vision-Language Models in the Korean Language | Code | 0 |
| How Well Can Vison-Language Models Understand Humans' Intention? An Open-ended Theory of Mind Question Evaluation Benchmark | | 0 |
| JEEM: Vision-Language Understanding in Four Arabic Dialects | | 0 |
| CTRL-O: Language-Controllable Object-Centric Visual Representation Learning | | 0 |
| Vision-Amplified Semantic Entropy for Hallucination Detection in Medical Visual Question Answering | | 0 |
| Instruction-Oriented Preference Alignment for Enhancing Multi-Modal Comprehension Capability of MLLMs | | 0 |
| Feature4X: Bridging Any Monocular Video to 4D Agentic AI with Versatile Gaussian Feature Fields | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |