SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 201–250 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| Treble Counterfactual VLMs: A Causal Approach to Hallucination | Code | 0 |
| Integrating Frequency-Domain Representations with Low-Rank Adaptation in Vision-Language Models | — | 0 |
| SplatTalk: 3D VQA with Gaussian Splatting | — | 0 |
| MoEMoE: Question Guided Dense and Scalable Sparse Mixture-of-Expert for Multi-source Multi-modal Answering | — | 0 |
| Keeping Yourself is Important in Downstream Tuning Multimodal Large Language Model | Code | 2 |
| Enhancing SAM with Efficient Prompting and Preference Optimization for Semi-supervised Medical Image Segmentation | — | 0 |
| AnyAnomaly: Zero-Shot Customizable Video Anomaly Detection with LVLM | Code | 2 |
| Question-Aware Gaussian Experts for Audio-Visual Question Answering | Code | 1 |
| Enhancing Vietnamese VQA through Curriculum Learning on Raw and Augmented Text Representations | Code | 0 |
| OWLViz: An Open-World Benchmark for Visual Question Answering | — | 0 |
| BioD2C: A Dual-level Semantic Consistency Constraint Framework for Biomedical VQA | Code | 0 |
| Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models | Code | 0 |
| FunBench: Benchmarking Fundus Reading Skills of MLLMs | — | 0 |
| CL-MoE: Enhancing Multimodal Large Language Model with Dual Momentum Mixture-of-Experts for Continual Visual Question Answering | — | 0 |
| Fine-Grained Retrieval-Augmented Generation for Visual Question Answering | — | 0 |
| MedHallTune: An Instruction-Tuning Benchmark for Mitigating Medical Hallucination in Vision-Language Models | Code | 0 |
| Can Large Language Models Unveil the Mysteries? An Exploration of Their Ability to Unlock Information in Complex Scenarios | — | 0 |
| MedVLM-R1: Incentivizing Medical Reasoning Capability of Vision-Language Models (VLMs) via Reinforcement Learning | — | 0 |
| Talking to the brain: Using Large Language Models as Proxies to Model Brain Semantic Representation | — | 0 |
| Detecting Knowledge Boundary of Vision Large Language Models by Sampling-Based Inference | Code | 0 |
| FilterRAG: Zero-Shot Informed Retrieval-Augmented Generation to Mitigate Hallucinations in VQA | — | 0 |
| MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs | Code | 3 |
| All-in-one: Understanding and Generation in Multimodal Reasoning with the MAIA Benchmark | — | 0 |
| Retrieval-Augmented Visual Question Answering via Built-in Autoregressive Search Engines | — | 0 |
| Tracking the Copyright of Large Vision-Language Models through Parameter Learning Adversarial Images | — | 0 |
| TransMamba: Fast Universal Architecture Adaption from Transformers to Mamba | — | 0 |
| Directional Gradient Projection for Robust Fine-Tuning of Foundation Models | — | 0 |
| Multimodal RewardBench: Holistic Evaluation of Reward Models for Vision Language Models | Code | 2 |
| ChatVLA: Unified Multimodal Understanding and Robot Control with Vision-Language-Action Model | Code | 1 |
| Exploring Advanced Techniques for Visual Question Answering: A Comprehensive Comparison | — | 0 |
| Sce2DriveX: A Generalized MLLM Framework for Scene-to-Drive Learning | — | 0 |
| PitVQA++: Vector Matrix-Low-Rank Adaptation for Open-Ended Visual Question Answering in Pituitary Surgery | Code | 0 |
| SimpleVQA: Multimodal Factuality Evaluation for Multimodal Large Language Models | — | 0 |
| Re-Align: Aligning Vision Language Models via Retrieval-Augmented Direct Preference Optimization | Code | 2 |
| SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation | Code | 3 |
| MMXU: A Multi-Modal and Multi-X-ray Understanding Dataset for Disease Progression | Code | 1 |
| "See the World, Discover Knowledge": A Chinese Factuality Evaluation for Large Vision Language Models | — | 0 |
| Visual Graph Question Answering with ASP and LLMs for Language Parsing | — | 0 |
| Abduction of Domain Relationships from Data for VQA | — | 0 |
| EmoAssist: Emotional Assistant for Visual Impairment Community | — | 0 |
| Vision-Language Models for Edge Networks: A Comprehensive Survey | — | 0 |
| ClinKD: Cross-Modal Clinical Knowledge Distiller For Multi-Task Medical Images | Code | 0 |
| Performance Analysis of Traditional VQA Models Under Limited Computational Resources | — | 0 |
| Hummingbird: High Fidelity Image Generation via Multimodal Context Alignment | — | 0 |
| Efficient Few-Shot Continual Learning in Vision-Language Models | — | 0 |
| No Images, No Problem: Retaining Knowledge in Continual VQA with Questions-Only Memory | Code | 0 |
| PixFoundation: Are We Heading in the Right Direction with Pixel-level Vision Foundation Models? | Code | 1 |
| DocMIA: Document-Level Membership Inference Attacks against DocVQA Models | Code | 0 |
| Exploring Spatial Language Grounding Through Referring Expressions | — | 0 |
| Robust-LLaVA: On the Effectiveness of Large-Scale Robust Image Encoders for Multi-modal Large Language Models | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |