SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1351–1400 of 2177 papers

Title | Status | Hype
Separate and Locate: Rethink the Text in Text-based Visual Question Answering | Code | 0
Expanding Frozen Vision-Language Models without Retraining: Towards Improved Robot Perception | – | 0
DLIP: Distilling Language-Image Pre-training | – | 0
EVE: Efficient Vision-Language Pre-training with Masked Prediction and Modality-Aware MoE | – | 0
SCULPT: Shape-Conditioned Unpaired Learning of Pose-dependent Clothed and Textured Human Meshes | – | 0
VQA Therapy: Exploring Answer Differences by Visually Grounding Answers | Code | 0
Generic Attention-model Explainability by Weighted Relevance Accumulation | – | 0
Towards Grounded Visual Spatial Reasoning in Multi-Modal Vision Language Models | – | 0
Learning the meanings of function words from grounded language using a visual question answering model | Code | 0
TIJO: Trigger Inversion with Joint Optimization for Defending Multimodal Backdoored Models | Code | 0
RealCQA: Scientific Chart Question Answering as a Test-bed for First-Order Logic | Code | 0
ELIXR: Towards a general purpose X-ray artificial intelligence system through alignment of large language models and radiology vision encoders | – | 0
Context-VQA: Towards Context-Aware and Purposeful Visual Question Answering | Code | 0
BARTPhoBEiT: Pre-trained Sequence-to-Sequence and Image Transformers Models for Vietnamese Visual Question Answering | – | 0
LOIS: Looking Out of Instance Semantics for Visual Question Answering | – | 0
Robust Visual Question Answering: Datasets, Methods, and Future Challenges | – | 0
A reinforcement learning approach for VQA validation: an application to diabetic macular edema grading | – | 0
Generative Visual Question Answering | – | 0
Let's ViCE! Mimicking Human Cognitive Behavior in Image Generation Evaluation | – | 0
Towards a performance analysis on pre-trained Visual Question Answering models for autonomous driving | Code | 0
PAT: Parallel Attention Transformer for Visual Question Answering in Vietnamese | – | 0
A scoping review on multimodal deep learning in biomedical images and texts | – | 0
Structure Guided Multi-modal Pre-trained Transformer for Knowledge Graph Reasoning | – | 0
UIT-Saviors at MEDVQA-GI 2023: Improving Multimodal Learning with Image Enhancement for Gastrointestinal Visual Question Answering | – | 0
Pre-Training Multi-Modal Dense Retrievers for Outside-Knowledge Visual Question Answering | Code | 0
Switch-BERT: Learning to Model Multimodal Interactions by Switching Attention and Input | – | 0
Visual Question Answering in Remote Sensing with Cross-Attention and Multimodal Information Bottleneck | – | 0
TaCA: Upgrading Your Visual Foundation Model with Task-agnostic Compatible Adapter | Code | 0
Encyclopedic VQA: Visual questions about detailed properties of fine-grained categories | – | 0
AVIS: Autonomous Visual Information Seeking with Large Language Model Agent | – | 0
Safeguarding Data in Multimodal AI: A Differentially Private Approach to CLIP Training | Code | 0
Visual Question Answering (VQA) on Images with Superimposed Text | – | 0
A Survey of Vision-Language Pre-training from the Lens of Multimodal Machine Translation | – | 0
Knowledge Detection by Relevant Question and Image Attributes in Visual Question Answering | – | 0
Diversifying Joint Vision-Language Tokenization Learning | – | 0
Multi-CLIP: Contrastive Vision-Language Pre-training for Question Answering tasks in 3D Scenes | – | 0
LiT-4-RSVQA: Lightweight Transformer-based Visual Question Answering in Remote Sensing | – | 0
Evaluating the Capabilities of Multi-modal Reasoning Models with Synthetic Task Data | – | 0
Overcoming Language Bias in Remote Sensing Visual Question Answering via Adversarial Training | – | 0
Unveiling Cross Modality Bias in Visual Question Answering: A Causal View with Possible Worlds VQA | – | 0
Using Visual Cropping to Enhance Fine-Detail Question Answering of BLIP-Family Models | – | 0
Generate then Select: Open-ended Visual Question Answering Guided by World Knowledge | – | 0
HaVQA: A Dataset for Visual Question Answering and Multimodal Research in Hausa Language | Code | 0
Modularized Zero-shot VQA with Pre-trained Models | Code | 0
Zero-shot Visual Question Answering with Language Model Feedback | Code | 0
Mindstorms in Natural Language-Based Societies of Mind | – | 0
GRILL: Grounded Vision-language Pre-training via Aligning Text and Image Regions | – | 0
Measuring Faithful and Plausible Visual Grounding in VQA | Code | 0
EmbodiedGPT: Vision-Language Pre-Training via Embodied Chain of Thought | – | 0
Dynamic Clue Bottlenecks: Towards Interpretable-by-Design Visual Question Answering | – | 0
Page 28 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | – | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | – | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | – | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | – | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | – | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | – | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | – | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | – | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | – | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | – | Unverified