SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1176–1200 of 2177 papers

Title | Status | Hype
HAMMR: HierArchical MultiModal React agents for generic VQA | — | 0
Self-Training Large Language Models for Improved Visual Program Synthesis With Visual Reinforcement | — | 0
Soft-Prompting with Graph-of-Thought for Multi-modal Representation Learning | Code | 0
Joint Visual and Text Prompting for Improved Object-Centric Perception with Multimodal Large Language Models | Code | 0
BuDDIE: A Business Document Dataset for Multi-task Information Extraction | — | 0
TinyVQA: Compact Multimodal Deep Neural Network for Visual Question Answering on Resource-Constrained Devices | — | 0
Enhancing Human-Computer Interaction in Chest X-ray Analysis using Vision and Language Model with Eye Gaze Patterns | — | 0
Detect2Interact: Localizing Object Key Field in Visual Question Answering (VQA) with LLMs | — | 0
Learning by Correction: Efficient Tuning Task for Zero-Shot Generative Vision-Language Reasoning | Code | 0
Design as Desired: Utilizing Visual Question Answering for Multimodal Pre-training | Code | 0
Uncovering Bias in Large Vision-Language Models with Counterfactuals | — | 0
A Gaze-grounded Visual Question Answering Dataset for Clarifying Ambiguous Japanese Questions | — | 0
Visual Hallucination: Definition, Quantification, and Prescriptive Remediations | — | 0
Intrinsic Subgraph Generation for Interpretable Graph based Visual Question Answering | Code | 0
Synthesize Step-by-Step: Tools, Templates and LLMs as Data Generators for Reasoning-Based Chart VQA | — | 0
PropTest: Automatic Property Testing for Improved Visual Programming | — | 0
Surgical-LVLM: Learning to Adapt Large Vision-Language Model for Grounded Visual Question Answering in Robotic Surgery | — | 0
MyVLM: Personalizing VLMs for User-Specific Queries | — | 0
VL-Mamba: Exploring State Space Models for Multimodal Learning | — | 0
Improved Baselines for Data-efficient Perceptual Augmentation of LLMs | — | 0
As Firm As Their Foundations: Can open-sourced foundation models be used to create adversarial examples for downstream tasks? | — | 0
WoLF: Wide-scope Large Language Model Framework for CXR Understanding | — | 0
FlexCap: Describe Anything in Images in Controllable Detail | — | 0
Can LLMs Generate Human-Like Wayfinding Instructions? Towards Platform-Agnostic Embodied Instruction Synthesis | — | 0
SpatialPIN: Enhancing Spatial Reasoning Capabilities of Vision-Language Models through Prompting and Interacting 3D Priors | — | 0
Page 48 of 88

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | — | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified