SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1801–1850 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| FEDMEKI: A Benchmark for Scaling Medical Foundation Models via Federated Knowledge Injection | Code | 0 |
| Federated Document Visual Question Answering: A Pilot Study | Code | 0 |
| OG-SGG: Ontology-Guided Scene Graph Generation. A Case Study in Transfer Learning for Telepresence Robotics | Code | 0 |
| Core Tokensets for Data-efficient Sequential Training of Transformers | Code | 0 |
| Copy-Move Forgery Detection and Question Answering for Remote Sensing Image | Code | 0 |
| Multi-Page Document Visual Question Answering using Self-Attention Scoring Mechanism | Code | 0 |
| OmniFusion Technical Report | Code | 0 |
| Multimodal Residual Learning for Visual QA | Code | 0 |
| OmniNet: A unified architecture for multi-modal multi-task learning | Code | 0 |
| Convincing Rationales for Visual Question Answering Reasoning | Code | 0 |
| AdaVQA: Overcoming Language Priors with Adapted Margin Cosine Loss | Code | 0 |
| Multimodal Preference Data Synthetic Alignment with Reward Model | Code | 0 |
| Multimodal Large Language Models and Tunings: Vision, Language, Sensors, Audio, and Beyond | Code | 0 |
| Visual Text Matters: Improving Text-KVQA with Visual Text Entity Knowledge-aware Large Multimodal Assistant | Code | 0 |
| Factor Graph Attention | Code | 0 |
| Continual VQA for Disaster Response Systems | Code | 0 |
| On Modality Bias Recognition and Reduction | Code | 0 |
| Exploring the Effect of Primitives for Compositional Generalization in Vision-and-Language | Code | 0 |
| Answering Diverse Questions via Text Attached with Key Audio-Visual Clues | Code | 0 |
| Context-VQA: Towards Context-Aware and Purposeful Visual Question Answering | Code | 0 |
| Exploring Modulated Detection Transformer as a Tool for Action Recognition in Videos | Code | 0 |
| Contextual Dropout: An Efficient Sample-Dependent Dropout Module | Code | 0 |
| Multi-modal Factorized Bilinear Pooling with Co-Attention Learning for Visual Question Answering | Code | 0 |
| Consistency of Compositional Generalization across Multiple Levels | Code | 0 |
| What's Different between Visual Question Answering for Machine "Understanding" Versus for Accessibility? | Code | 0 |
| Synthesizing Sentiment-Controlled Feedback For Multimodal Text and Image Data | Code | 0 |
| Synthetic Document Question Answering in Hungarian | Code | 0 |
| Multimodal Explanations: Justifying Decisions and Pointing to the Evidence | Code | 0 |
| Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding | Code | 0 |
| Language Models Meet Anomaly Detection for Better Interpretability and Generalizability | Code | 0 |
| Open-Ended Visual Question-Answering | Code | 0 |
| ViTextVQA: A Large-Scale Visual Question Answering Dataset for Evaluating Vietnamese Text Comprehension in Images | Code | 0 |
| Multi-Image Visual Question Answering | Code | 0 |
| MQA: Answering the Question via Robotic Manipulation | Code | 0 |
| Open-Set Knowledge-Based Visual Question Answering with Inference Paths | Code | 0 |
| OpenViVQA: Task, Dataset, and Multimodal Fusion Models for Visual Question Answering in Vietnamese | Code | 0 |
| T2I-FineEval: Fine-Grained Compositional Metric for Text-to-Image Evaluation | Code | 0 |
| Examining Gender and Racial Bias in Large Vision-Language Models Using a Novel Dataset of Parallel Images | Code | 0 |
| Evaluating Fairness in Large Vision-Language Models Across Diverse Demographic Attributes and Prompts | Code | 0 |
| TAB-VCR: Tags and Attributes based Visual Commonsense Reasoning Baselines | Code | 0 |
| TAB-VCR: Tags and Attributes based VCR Baselines | Code | 0 |
| TaCA: Upgrading Your Visual Foundation Model with Task-agnostic Compatible Adapter | Code | 0 |
| OsmLocator: locating overlapping scatter marks with a non-training generative perspective | Code | 0 |
| Modulating early visual processing by language | Code | 0 |
| Modularized Zero-shot VQA with Pre-trained Models | Code | 0 |
| What's in a Question: Using Visual Questions as a Form of Supervision | Code | 0 |
| Outside Knowledge Conversational Video (OKCV) Dataset -- Dialoguing over Videos | Code | 0 |
| MM-Prompt: Cross-Modal Prompt Tuning for Continual Visual Question Answering | Code | 0 |
| Evaluating Attribute Comprehension in Large Vision-Language Models | Code | 0 |
| Compressing And Debiasing Vision-Language Pre-Trained Models for Visual Question Answering | Code | 0 |
Page 37 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |