SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 51–100 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| Multi-Sourced Compositional Generalization in Visual Question Answering | Code | 0 |
| NegVQA: Can Vision Language Models Understand Negation? | — | 0 |
| FRAMES-VQA: Benchmarking Fine-Tuning Robustness across Multi-Modal Shifts in Visual Question Answering | Code | 0 |
| Music's Multimodal Complexity in AVQA: Why We Need More than General Multimodal LLMs | Code | 0 |
| MineAnyBuild: Benchmarking Spatial Planning for Open-world AI Agents | Code | 1 |
| Benchmarking Large Multimodal Models for Ophthalmic Visual Question Answering with OphthalWeChat | — | 0 |
| MangaVQA and MangaLMM: A Benchmark and Specialized Model for Multimodal Manga Understanding | Code | 1 |
| MM-Prompt: Cross-Modal Prompt Tuning for Continual Visual Question Answering | Code | 0 |
| Visualized Text-to-Image Retrieval | Code | 1 |
| VTool-R1: VLMs Learn to Think with Images via Reinforcement Learning on Multimodal Tool Use | Code | 2 |
| Are Vision Language Models Ready for Clinical Diagnosis? A 3D Medical Benchmark for Tumor-centric Visual Question Answering | Code | 1 |
| GC-KBVQA: A New Four-Stage Framework for Enhancing Knowledge Based Visual Question Answering Performance | — | 0 |
| InfoChartQA: A Benchmark for Multimodal Question Answering on Infographic Charts | Code | 3 |
| SATORI-R1: Incentivizing Multimodal Reasoning with Spatial Grounding and Verifiable Rewards | Code | 1 |
| Scaling Up Biomedical Vision-Language Models: Fine-Tuning, Instruction Tuning, and Multi-Modal Learning | Code | 4 |
| CXReasonBench: A Benchmark for Evaluating Structured Diagnostic Reasoning in Chest X-rays | Code | 0 |
| VEAttack: Downstream-agnostic Vision Encoder Attack against Large Vision Language Models | Code | 1 |
| Mitigating Hallucinations in Vision-Language Models through Image-Guided Head Suppression | Code | 1 |
| CT-Agent: A Multimodal-LLM Agent for 3D CT Radiology Question Answering | — | 0 |
| A Causal Approach to Mitigate Modality Preference Bias in Medical Visual Question Answering | — | 0 |
| Steering LVLMs via Sparse Autoencoder for Hallucination Mitigation | — | 0 |
| Benchmarking Retrieval-Augmented Multimodal Generation for Document Question Answering | Code | 1 |
| Zero-Shot Anomaly Detection in Battery Thermal Images Using Visual Question Answering with Prior Knowledge | — | 0 |
| Seeing Far and Clearly: Mitigating Hallucinations in MLLMs with Attention Causal Decoding | — | 0 |
| Grounding Chest X-Ray Visual Question Answering with Generated Radiology Reports | — | 0 |
| Human-centered Interactive Learning via MLLMs for Text-to-Image Person Re-identification | — | 0 |
| Discovering Pathology Rationale and Token Allocation for Efficient Multimodal Pathology Reasoning | — | 0 |
| TinyDrive: Multiscale Visual Question Answering with Selective Token Routing for Autonomous Driving | — | 0 |
| Robo2VLM: Visual Question Answering from Large-Scale In-the-Wild Robot Manipulation Datasets | — | 0 |
| TimeCausality: Evaluating the Causal Ability in Time Dimension for Vision Language Models | Code | 0 |
| Visual Question Answering on Multiple Remote Sensing Image Modalities | — | 0 |
| SNAP: A Benchmark for Testing the Effects of Capture Conditions on Fundamental Vision Tasks | Code | 0 |
| Traveling Across Languages: Benchmarking Cross-Lingual Consistency in Multimodal LLMs | Code | 0 |
| Toward Effective Reinforcement Learning Fine-Tuning for Medical VQA in Vision-Language Models | — | 0 |
| Debating for Better Reasoning: An Unsupervised Multimodal Approach | — | 0 |
| Towards Omnidirectional Reasoning with 360-R1: A Dataset, Benchmark, and GRPO-based Method | — | 0 |
| Domain Adaptation of VLM for Soccer Video Understanding | — | 0 |
| RAVENEA: A Benchmark for Multimodal Retrieval-Augmented Visual Culture Understanding | Code | 0 |
| Understanding Complexity in VideoQA via Visual Program Generation | — | 0 |
| Reasoning-OCR: Can Large Multimodal Models Solve Complex Logical Reasoning Problems from OCR Cues? | Code | 1 |
| MedAgentBoard: Benchmarking Multi-Agent Collaboration with Conventional Methods for Diverse Medical Tasks | Code | 1 |
| HumaniBench: A Human-Centric Framework for Large Multimodal Models Evaluation | Code | 0 |
| Patho-R1: A Multimodal Reinforcement Learning-Based Pathology Expert Reasoner | Code | 2 |
| TCC-Bench: Benchmarking the Traditional Chinese Culture Understanding Capabilities of MLLMs | Code | 0 |
| End-to-End Vision Tokenizer Tuning | — | 0 |
| Variational Visual Question Answering | — | 0 |
| Visually Interpretable Subtask Reasoning for Visual Question Answering | Code | 0 |
| Multi-Modal Explainable Medical AI Assistant for Trustworthy Human-AI Collaboration | — | 0 |
| OMGM: Orchestrate Multiple Granularities and Modalities for Efficient Multimodal Retrieval | — | 0 |
| Natural Reflection Backdoor Attack on Vision Language Model for Autonomous Driving | — | 0 |
Page 2 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |