SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1551–1600 of 2177 papers

Title | Status | Hype
Towards Top-Down Reasoning: An Explainable Multi-Agent Approach for Visual Question Answering |  | 0
Towards Transparent AI Systems: Interpreting Visual Question Answering Models |  | 0
Towards Truly Zero-shot Compositional Visual Reasoning with LLMs as Programmers |  | 0
Towards Unsupervised Visual Reasoning: Do Off-The-Shelf Features Know How to Reason? |  | 0
Towards Visual Dialog for Radiology |  | 0
Toward Unsupervised Realistic Visual Question Answering |  | 0
Tracking the Copyright of Large Vision-Language Models through Parameter Learning Adversarial Images |  | 0
Training Recurrent Answering Units with Joint Loss Minimization for VQA |  | 0
Transferable Adversarial Attacks on Black-Box Vision-Language Models |  | 0
Transformers in Vision: A Survey |  | 0
Transform-Retrieve-Generate: Natural Language-Centric Outside-Knowledge Visual Question Answering |  | 0
Translation Deserves Better: Analyzing Translation Artifacts in Cross-lingual Visual Question Answering |  | 0
TransMamba: Fast Universal Architecture Adaption from Transformers to Mamba |  | 0
TraveLLaMA: Facilitating Multi-modal Large Language Models to Understand Urban Scenes and Provide Travel Assistance |  | 0
Tree Memory Networks for Modelling Long-term Temporal Dependencies |  | 0
Triplet-Aware Scene Graph Embeddings |  | 0
Tri-VQA: Triangular Reasoning Medical Visual Question Answering for Multi-Attribute Analysis |  | 0
TrojVLM: Backdoor Attack Against Vision Language Models |  | 0
TRRNet: Tiered Relation Reasoning for Compositional Visual Question Answering |  | 0
TruthLens: A Training-Free Paradigm for DeepFake Detection |  | 0
Two can play this Game: Visual Dialog with Discriminative Question Generation and Answering |  | 0
TxT: Crossmodal End-to-End Learning with Transformers |  | 0
UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training |  | 0
U-CAM: Visual Explanation using Uncertainty based Class Activation Maps |  | 0
SearchLVLMs: A Plug-and-Play Framework for Augmenting Large Vision-Language Models by Searching Up-to-Date Internet Knowledge |  | 0
UFO: A UniFied TransfOrmer for Vision-Language Representation Learning |  | 0
UIT-Saviors at MEDVQA-GI 2023: Improving Multimodal Learning with Image Enhancement for Gastrointestinal Visual Question Answering |  | 0
Unanswerable Questions about Images and Texts |  | 0
Uncertainty-based Visual Question Answering: Estimating Semantic Inconsistency between Image and Knowledge Base |  | 0
Uncovering Bias in Large Vision-Language Models with Counterfactuals |  | 0
Uncovering Bias in Large Vision-Language Models at Scale with Counterfactuals |  | 0
Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning |  | 0
Understanding Attention for Vision-and-Language Tasks |  | 0
Understanding Complexity in VideoQA via Visual Program Generation |  | 0
Understanding in Artificial Intelligence |  | 0
Understanding Information Storage and Transfer in Multi-modal Large Language Models |  | 0
Understanding Knowledge Gaps in Visual Question Answering: Implications for Gap Identification and Testing |  | 0
Understanding the Role of Scene Graphs in Visual Question Answering |  | 0
UnICLAM: Contrastive Representation Learning with Adversarial Masking for Unified and Interpretable Medical Vision Question Answering |  | 0
Bidirectional Contrastive Split Learning for Visual Question Answering |  | 0
Uni-EDEN: Universal Encoder-Decoder Network by Multi-Granular Vision-Language Pre-training |  | 0
Unified Multimodal Pre-training and Prompt-based Tuning for Vision-Language Understanding and Generation |  | 0
Uni-Mlip: Unified Self-supervision for Medical Vision Language Pre-training |  | 0
UNITER: Learning UNiversal Image-TExt Representations |  | 0
ViQuAE, a Dataset for Knowledge-based Visual Question Answering about Named Entities |  | 0
Unleashing the Potential of Large Language Model: Zero-shot VQA for Flood Disaster Scenario |  | 0
Unshuffling Data for Improved Generalization |  | 0
Unshuffling Data for Improved Generalization in Visual Question Answering |  | 0
Unsupervised Keyword Extraction for Full-sentence VQA |  | 0
Page 32 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 |  | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 |  | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 |  | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 |  | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 |  | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 |  | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 |  | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 |  | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 |  | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 |  | Unverified