SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 651–700 of 2177 papers

Title | Status | Hype
Overcoming Language Priors in Visual Question Answering via Distinguishing Superficially Similar Instances | Code | 0
Answering Diverse Questions via Text Attached with Key Audio-Visual Clues | Code | 0
BioD2C: A Dual-level Semantic Consistency Constraint Framework for Biomedical VQA | Code | 0
Discrete Subgraph Sampling for Interpretable Graph based Visual Question Answering | Code | 0
Discovering the Unknown Knowns: Turning Implicit Knowledge in the Dataset into Explicit Training Examples for Visual Question Answering | Code | 0
BinaryVQA: A Versatile Test Set to Evaluate the Out-of-Distribution Generalization of VQA Models | Code | 0
Open-Set Knowledge-Based Visual Question Answering with Inference Paths | Code | 0
OpenViVQA: Task, Dataset, and Multimodal Fusion Models for Visual Question Answering in Vietnamese | Code | 0
Diffusion-Refined VQA Annotations for Semi-Supervised Gaze Following | Code | 0
OsmLocator: locating overlapping scatter marks with a non-training generative perspective | Code | 0
Outside Knowledge Conversational Video (OKCV) Dataset -- Dialoguing over Videos | Code | 0
Difficult Task Yes but Simple Task No: Unveiling the Laziness in Multimodal LLMs | Code | 0
Differential Attention for Visual Question Answering | Code | 0
Differentiable Outlier Detection Enable Robust Deep Multimodal Analysis | Code | 0
Did the Model Understand the Question? | Code | 0
On Modality Bias Recognition and Reduction | Code | 0
Detecting Knowledge Boundary of Vision Large Language Models by Sampling-Based Inference | Code | 0
Open-Ended Visual Question-Answering | Code | 0
HumaniBench: A Human-Centric Framework for Large Multimodal Models Evaluation | Code | 0
Beyond Raw Videos: Understanding Edited Videos with Large Multimodal Model | Code | 0
OmniFusion Technical Report | Code | 0
Object-aware Adaptive-Positivity Learning for Audio-Visual Question Answering | Code | 0
OG-SGG: Ontology-Guided Scene Graph Generation. A Case Study in Transfer Learning for Telepresence Robotics | Code | 0
Design as Desired: Utilizing Visual Question Answering for Multimodal Pre-training | Code | 0
OmniNet: A unified architecture for multi-modal multi-task learning | Code | 0
P NP, at least in Visual Question Answering | Code | 0
Progressive Prompt Detailing for Improved Alignment in Text-to-Image Generative Models | Code | 0
An Improved Attention for Visual Question Answering | Code | 0
Delving Deeper into Cross-lingual Visual Question Answering | Code | 0
Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding | Code | 0
No Images, No Problem: Retaining Knowledge in Continual VQA with Questions-Only Memory | Code | 0
Beyond Bilinear: Generalized Multimodal Factorized High-order Pooling for Visual Question Answering | Code | 0
Noise Estimation Using Density Estimation for Self-Supervised Multimodal Learning | Code | 0
Deep Modular Co-Attention Networks for Visual Question Answering | Code | 0
Beyond Accuracy: A Consolidated Tool for Visual Question Answering Benchmarking | Code | 0
NAAQA: A Neural Architecture for Acoustic Question Answering | Code | 0
A Neuro-Symbolic ASP Pipeline for Visual Question Answering | Code | 0
NeSyCoCo: A Neuro-Symbolic Concept Composer for Compositional Generalization | Code | 0
Music's Multimodal Complexity in AVQA: Why We Need More than General Multimodal LLMs | Code | 0
BERTHop: An Effective Vision-and-Language Model for Chest X-ray Disease Diagnosis | Code | 0
MUTAN: Multimodal Tucker Fusion for Visual Question Answering | Code | 0
Declarative Knowledge Distillation from Large Language Models for Visual Question Answering Datasets | Code | 0
MUREL: Multimodal Relational Reasoning for Visual Question Answering | Code | 0
Neural Module Networks | Code | 0
Benchmarking Vision-Language Contrastive Methods for Medical Representation Learning | Code | 0
Multi-Page Document Visual Question Answering using Self-Attention Scoring Mechanism | Code | 0
Dataset and Benchmark for Urdu Natural Scenes Text Detection, Recognition and Visual Question Answering | Code | 0
Multiple interaction learning with question-type prior knowledge for constraining answer search space in visual question answering | Code | 0
Multi-Sourced Compositional Generalization in Visual Question Answering | Code | 0
Multimodal Preference Data Synthetic Alignment with Reward Model | Code | 0
Page 14 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified