SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 801–850 of 2177 papers

Title | Status | Hype
Attention on Attention: Architectures for Visual Question Answering (VQA) | Code | 0
HRIBench: Benchmarking Vision-Language Models for Real-Time Human Perception in Human-Robot Interaction | Code | 0
NeSyCoCo: A Neuro-Symbolic Concept Composer for Compositional Generalization | Code | 0
Compressing And Debiasing Vision-Language Pre-Trained Models for Visual Question Answering | Code | 0
Neural Module Networks | Code | 0
Query and Attention Augmentation for Knowledge-Based Explainable Reasoning | Code | 0
Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding | Code | 0
Adapting Lightweight Vision Language Models for Radiological Visual Question Answering | Code | 0
NAAQA: A Neural Architecture for Acoustic Question Answering | Code | 0
No Images, No Problem: Retaining Knowledge in Continual VQA with Questions-Only Memory | Code | 0
MUREL: Multimodal Relational Reasoning for Visual Question Answering | Code | 0
Compositionality as Lexical Symmetry | Code | 0
Music's Multimodal Complexity in AVQA: Why We Need More than General Multimodal LLMs | Code | 0
Compositional Image-Text Matching and Retrieval by Grounding Entities | Code | 0
MUTAN: Multimodal Tucker Fusion for Visual Question Answering | Code | 0
Generalizing Visual Question Answering from Synthetic to Human-Written Questions via a Chain of QA with a Large Language Model | Code | 0
Targeted Visual Prompting for Medical Visual Question Answering | Code | 0
Multi-Sourced Compositional Generalization in Visual Question Answering | Code | 0
Noise Estimation Using Density Estimation for Self-Supervised Multimodal Learning | Code | 0
General Greedy De-bias Learning | Code | 0
II-MMR: Identifying and Improving Multi-modal Multi-hop Reasoning in Visual Question Answering | Code | 0
IIU: Independent Inference Units for Knowledge-based Visual Question Answering | Code | 0
Compact Trilinear Interaction for Visual Question Answering | Code | 0
Multimodal Residual Learning for Visual QA | Code | 0
CommVQA: Situating Visual Question Answering in Communicative Contexts | Code | 0
Multimodal Preference Data Synthetic Alignment with Reward Model | Code | 0
COLUMBUS: Evaluating COgnitive Lateral Understanding through Multiple-choice reBUSes | Code | 0
IMAD: IMage-Augmented multi-modal Dialogue | Code | 0
Game of Sketches: Deep Recurrent Models of Pictionary-style Word Guessing | Code | 0
Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks | Code | 0
Multimodal Explanations: Justifying Decisions and Pointing to the Evidence | Code | 0
Multi-modal Factorized Bilinear Pooling with Co-Attention Learning for Visual Question Answering | Code | 0
Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding | Code | 0
Multimodal Large Language Models and Tunings: Vision, Language, Sensors, Audio, and Beyond | Code | 0
Fully Authentic Visual Question Answering Dataset from Online Communities | Code | 0
Adapting Visual Question Answering Models for Enhancing Multimodal Community Q&A Platforms | Code | 0
Language Models Meet Anomaly Detection for Better Interpretability and Generalizability | Code | 0
Right this way: Can VLMs Guide Us to See More to Answer Questions? | Code | 0
Cognitive Visual Commonsense Reasoning Using Dynamic Working Memory | Code | 0
Multi-Image Visual Question Answering | Code | 0
MQA: Answering the Question via Robotic Manipulation | Code | 0
From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models | Code | 0
Multi-Page Document Visual Question Answering using Self-Attention Scoring Mechanism | Code | 0
Modularized Zero-shot VQA with Pre-trained Models | Code | 0
FRAMES-VQA: Benchmarking Fine-Tuning Robustness across Multi-Modal Shifts in Visual Question Answering | Code | 0
Co-attending Regions and Detections with Multi-modal Multiplicative Embedding for VQA | Code | 0
Co-attending Free-form Regions and Detections with Multi-modal Multiplicative Feature Embedding for Visual Question Answering | Code | 0
Modulating early visual processing by language | Code | 0
MM-PoE: Multiple Choice Reasoning via. Process of Elimination using Multi-Modal Models | Code | 0
Focal Visual-Text Attention for Visual Question Answering | Code | 0
Page 17 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | - | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | - | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | - | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | - | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | - | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | - | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | - | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | - | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | - | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | - | Unverified