SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 2101–2150 of 2177 papers

Title | Status | Hype
Language bias in Visual Question Answering: A Survey and Taxonomy | | 0
Language Features Matter: Effective Language Representations for Vision-Language Tasks | | 0
From Pixels to Prose: Advancing Multi-Modal Language Models for Remote Sensing | | 0
Language-Image Models with 3D Understanding | | 0
From Pixels to Objects: Cubic Visual Attention for Visual Question Answering | | 0
Language Is Not All You Need: Aligning Perception with Language Models | | 0
From Known to the Unknown: Transferring Knowledge to Answer Questions about Novel Visual and Semantic Concepts | | 0
From Image to Language: A Critical Analysis of Visual Question Answering (VQA) Approaches, Challenges, and Opportunities | | 0
From Images to Textual Prompts: Zero-Shot Visual Question Answering With Frozen Large Language Models | | 0
From Head to Tail: Towards Balanced Representation in Large Vision-Language Models through Adaptive Data Calibration | | 0
Zero-Shot Visual Reasoning by Vision-Language Models: Benchmarking and Analysis | | 0
UPME: An Unsupervised Peer Review Framework for Multimodal Large Language Model Evaluation | | 0
From Easy to Hard: Learning Language-guided Curriculum for Visual Question Answering on Remote Sensing Data | | 0
freePruner: A Training-free Approach for Large Multimodal Model Acceleration | | 0
Free Form Medical Visual Question Answering in Radiology | | 0
Foundational Model for Electron Micrograph Analysis: Instruction-Tuning Small-Scale Language-and-Vision Assistant for Enterprise Adoption | | 0
Large Scale Scene Text Verification with Guided Attention | | 0
Large Vision-Language Models for Remote Sensing Visual Question Answering | | 0
Using Visual Cropping to Enhance Fine-Detail Question Answering of BLIP-Family Models | | 0
Latent Variable Models for Visual Question Answering | | 0
Fooling Vision and Language Models Despite Localization and Attention Mechanism | | 0
LaVida Drive: Vision-Text Interaction VLM for Autonomous Driving with Token Selection, Recovery and Enhancement | | 0
LAVIS: A Library for Language-Vision Intelligence | | 0
VALSE: A Task-Independent Benchmark for Vision and Language Models centered on Linguistic Phenomena | | 0
Answer-Me: Multi-Task Open-Vocabulary Visual Question Answering | | 0
LCV2: An Efficient Pretraining-Free Framework for Grounded Visual Question Answering | | 0
FocusLLaVA: A Coarse-to-Fine Approach for Efficient and Effective Visual Token Compression | | 0
Answer-checking in Context: A Multi-modal Fully Attention Network for Visual Question Answering | | 0
Learning Answer Embeddings for Visual Question Answering | | 0
Learning by Asking Questions | | 0
A Novel Framework for Robustness Analysis of Visual QA Models | | 0
Learning by Hallucinating: Vision-Language Pre-training with Weak Supervision | | 0
Learning Compositional Representation for Few-shot Visual Question Answering | | 0
Variational Disentangled Attention for Regularized Visual Dialog | | 0
Variational Visual Question Answering | | 0
A Novel Attention-based Aggregation Function to Combine Vision and Language | | 0
FOCUS: Internal MLLM Representations for Efficient Fine-Grained Visual Question Answering | | 0
VCD: Knowledge Base Guided Visual Commonsense Discovery in Images | | 0
Learning How To Ask: Cycle-Consistency Refines Prompts in Multimodal Foundation Models | | 0
Learning Models for Actions and Person-Object Interactions with Transfer to Question Answering | | 0
Learning Reasoning Paths over Semantic Graphs for Video-grounded Dialogues | | 0
An Open-Source Software Toolkit & Benchmark Suite for the Evaluation and Adaptation of Multimodal Action Models | | 0
Learning Rich Image Region Representation for Visual Question Answering | | 0
FMBench: Benchmarking Fairness in Multimodal Large Language Models on Medical Tasks | | 0
Learning Sparse Mixture of Experts for Visual Question Answering | | 0
Learning Sparsity for Effective and Efficient Music Performance Question Answering | | 0
Annotation Methodologies for Vision and Language Dataset Creation | | 0
FlowVQA: Mapping Multimodal Logic in Visual Question Answering with Flowcharts | | 0
FlexCap: Describe Anything in Images in Controllable Detail | | 0
Learning to Compose Diversified Prompts for Image Emotion Classification | | 0
Page 43 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified