SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 601–650 of 2177 papers

Title | Status | Hype
Benchmarking Vision Language Models for Cultural Understanding | — | 0
DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception | Code | 2
Segmentation-guided Attention for Visual Question Answering from Remote Sensing Images | — | 0
Extracting Training Data from Document-Based VQA Models | — | 0
VQA-Diff: Exploiting VQA and Diffusion for Zero-Shot Image-to-3D Vehicle Asset Generation in Autonomous Driving | — | 0
Large Language Models Understand Layout | Code | 0
WSI-VQA: Interpreting Whole Slide Images by Generative Visual Question Answering | Code | 2
Second Place Solution of WSDM2023 Toloka Visual Question Answering Challenge | — | 0
Rethinking Visual Prompting for Multimodal Large Language Models with External Knowledge | — | 0
Black-box Model Ensembling for Textual and Visual Question Answering via Information Fusion | Code | 0
MiniGPT-Med: Large Language Model as a General Interface for Radiology Diagnosis | Code | 2
Visual Robustness Benchmark for Visual Question Answering (VQA) | Code | 0
InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output | — | 0
MindBench: A Comprehensive Benchmark for Mind Map Structure Recognition and Analysis | — | 0
BACON: Improving Clarity of Image Captions via Bag-of-Concept Graphs | — | 0
A Bounding Box is Worth One Token: Interleaving Layout and Text in a Large Language Model for Document Understanding | Code | 2
TokenPacker: Efficient Visual Projector for Multimodal LLM | Code | 3
Certainly Uncertain: A Benchmark and Metric for Multimodal Epistemic and Aleatoric Awareness | — | 0
CVLUE: A New Benchmark Dataset for Chinese Vision-Language Understanding Evaluation | Code | 1
Assistive Image Annotation Systems with Deep Learning and Natural Language Capabilities: A Review | — | 0
Efficient Large Multi-modal Models via Visual Context Compression | Code | 2
MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment | Code | 1
STLLaVA-Med: Self-Training Large Language and Vision Assistant for Medical Question-Answering | Code | 1
Disentangling Knowledge-based and Visual Reasoning by Question Decomposition in KB-VQA | — | 0
FlowVQA: Mapping Multimodal Logic in Visual Question Answering with Flowcharts | — | 0
The Illusion of Competence: Evaluating the Effect of Explanations on Users' Mental Models of Visual Question Answering Systems | Code | 0
Enhancing Continual Learning in Visual Question Answering with Modality-Aware Feature Distillation | Code | 0
Evaluating Fairness in Large Vision-Language Models Across Diverse Demographic Attributes and Prompts | Code | 0
MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning | Code | 2
Claude 3.5 Sonnet Model Card Addendum | — | 0
MM-SpuBench: Towards Better Understanding of Spurious Biases in Multimodal LLMs | — | 0
GPT-4V Explorations: Mining Autonomous Driving | — | 0
MR-MLLM: Mutual Reinforcement of Multimodal Comprehension and Vision Perception | — | 0
Tri-VQA: Triangular Reasoning Medical Visual Question Answering for Multi-Attribute Analysis | — | 0
Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? | — | 0
LIVE: Learnable In-Context Vector for Visual Question Answering | Code | 1
Enhancing Cross-Prompt Transferability in Vision-Language Models through Contextual Injection of Target Tokens | Code | 0
Diversify, Rationalize, and Combine: Ensembling Multiple QA Strategies for Zero-shot Knowledge-based VQA | Code | 0
VRSBench: A Versatile Vision-Language Benchmark Dataset for Remote Sensing Image Understanding | Code | 2
TroL: Traversal of Layers for Large Language and Vision Models | Code | 2
MMNeuron: Discovering Neuron-Level Domain-Specific Interpretation in Multimodal Large Language Model | Code | 1
LLARVA: Vision-Action Instruction Tuning Enhances Robot Learning | — | 0
MFC-Bench: Benchmarking Multimodal Fact-Checking with Large Vision-Language Models | Code | 1
MMDU: A Multi-Turn Multi-Image Dialog Understanding Benchmark and Instruction-Tuning Dataset for LVLMs | Code | 2
Program Synthesis Benchmark for Visual Programming in XLogoOnline Environment | — | 0
Mixture-of-Subspaces in Low-Rank Adaptation | Code | 0
Beyond Raw Videos: Understanding Edited Videos with Large Multimodal Model | Code | 0
VANE-Bench: Video Anomaly Evaluation Benchmark for Conversational LMMs | Code | 1
Precision Empowers, Excess Distracts: Visual Question Answering With Dynamically Infused Knowledge In Language Models | — | 0
Detecting and Evaluating Medical Hallucinations in Large Vision Language Models | — | 0
Page 13 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | — | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified