SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 2101–2150 of 2177 papers

Title | Status | Hype
How Modular Should Neural Module Networks Be for Systematic Generalization? | Code | 0
Targeted Visual Prompting for Medical Visual Question Answering | Code | 0
Self Supervision for Attention Networks | Code | 0
VQA Therapy: Exploring Answer Differences by Visually Grounding Answers | Code | 0
UMIT: Unifying Medical Imaging Tasks via Vision-Language Models | Code | 0
Semantically Equivalent Adversarial Rules for Debugging NLP models | Code | 0
Alignment Attention by Matching Key and Query Distributions | Code | 0
UNK-VQA: A Dataset and a Probe into the Abstention Ability of Multi-modal Large Models | Code | 0
Deep Modular Co-Attention Networks for Visual Question Answering | Code | 0
High-Order Attention Models for Visual Question Answering | Code | 0
12-in-1: Multi-Task Vision and Language Representation Learning | Code | 0
Aligning Visual Regions and Textual Concepts for Semantic-Grounded Image Representations | Code | 0
Separate and Locate: Rethink the Text in Text-based Visual Question Answering | Code | 0
Hierarchical Deep Multi-modal Network for Medical Visual Question Answering | Code | 0
Visual Question Answering: Datasets, Algorithms, and Future Challenges | Code | 0
Are VLMs Really Blind | Code | 0
Visual Question Answering From Another Perspective: CLEVR Mental Rotation Tests | Code | 0
ShapeWorld - A new test methodology for multimodal language understanding | Code | 0
ShareGPT4V: Improving Large Multi-Modal Models with Better Captions | Code | 0
Help Me Identify: Is an LLM+VQA System All We Need to Identify Visual Concepts? | Code | 0
Uncovering Hidden Connections: Iterative Search and Reasoning for Video-grounded Dialog | Code | 0
Show, Ask, Attend, and Answer: A Strong Baseline For Visual Question Answering | Code | 0
HaVQA: A Dataset for Visual Question Answering and Multimodal Research in Hausa Language | Code | 0
HALLUCINOGEN: A Benchmark for Evaluating Object Hallucination in Large Visual-Language Models | Code | 0
Uncovering the Full Potential of Visual Grounding Methods in VQA | Code | 0
Siamese Tracking with Lingual Object Constraints | Code | 0
World to Code: Multi-modal Data Generation via Self-Instructed Compositional Captioning and Filtering | Code | 0
VQS: Linking Segmentations to Questions and Answers for Supervised Attention in VQA and Question-Focused Semantic Segmentation | Code | 0
SilVar: Speech Driven Multimodal Model for Reasoning Visual Question Answering and Object Localization | Code | 0
Sim2Real Transfer for Vision-Based Grasp Verification | Code | 0
Hallucination Benchmark in Medical Visual Question Answering | Code | 0
Simple Baseline for Visual Question Answering | Code | 0
HalLoc: Token-level Localization of Hallucinations for Vision Language Models | Code | 0
Understanding Attention for Vision-and-Language Tasks | Code | 0
Are Vision LLMs Road-Ready? A Comprehensive Benchmark for Safety-Critical Driving Video Understanding | Code | 0
A Question-Centric Model for Visual Question Answering in Medical Imaging | Code | 0
Adversarial Training with OCR Modality Perturbation for Scene-Text Visual Question Answering | Code | 0
HAIBU-ReMUD: Reasoning Multimodal Ultrasound Dataset and Model Bridging to General Specific Domains | Code | 0
Applying recent advances in Visual Question Answering to Record Linkage | Code | 0
Understanding Multimodal LLMs: the Mechanistic Interpretability of Llava in Visual Question Answering | Code | 0
Single-Stream Multi-Level Alignment for Vision-Language Pretraining | Code | 0
VTQA: Visual Text Question Answering via Entity Alignment and Cross-Media Reasoning | Code | 0
Hadamard Product for Low-rank Bilinear Pooling | Code | 0
Guiding Vision-Language Model Selection for Visual Question-Answering Across Tasks, Domains, and Knowledge Types | Code | 0
Grounding Answers for Visual Questions Asked by Visually Impaired People | Code | 0
Grad-CAM: Why did you say that? | Code | 0
Generalizing Visual Question Answering from Synthetic to Human-Written Questions via a Chain of QA with a Large Language Model | Code | 0
SlotPi: Physics-informed Object-centric Reasoning Models | Code | 0
Understanding the World's Museums through Vision-Language Reasoning | Code | 0
Declarative Knowledge Distillation from Large Language Models for Visual Question Answering Datasets | Code | 0
Page 43 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | - | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | - | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | - | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | - | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | - | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | - | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | - | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | - | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | - | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | - | Unverified