SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 2001–2050 of 2177 papers

Title | Status | Hype
What Large Language Models Bring to Text-rich VQA? | | 0
Improving Users' Mental Model with Attention-directed Counterfactual Edits | | 0
Improving Visual Question Answering by Referring to Generated Paragraph Captions | | 0
Improving Visual Question Answering Models through Robustness Analysis and In-Context Learning with a Chain of Basic Questions | | 0
Improving VQA and its Explanations by Comparing Competing Explanations | | 0
Are VQA Systems RAD? Measuring Robustness to Augmented Data with Focused Interventions | | 0
Incorporating External Knowledge to Answer Open-Domain Visual Questions with Dynamic Memory Networks | | 0
A Restricted Visual Turing Test for Deep Scene and Event Understanding | | 0
Generic Attention-model Explainability by Weighted Relevance Accumulation | | 0
In Factuality: Efficient Integration of Relevant Facts for Visual Question Answering | | 0
InfiMM-HD: A Leap Forward in High-Resolution Multimodal Understanding | | 0
Generative Visual Question Answering | | 0
Generating Triples with Adversarial Networks for Scene Graph Construction | | 0
Generating Rationales in Visual Question Answering | | 0
InfographicVQA | | 0
Inquire, Interact, and Integrate: A Proactive Agent Collaborative Framework for Zero-Shot Multimodal Medical Reasoning | | 0
Instance-Level Trojan Attacks on Visual Question Answering via Adversarial Learning in Neuron Activation Space | | 0
Generating Natural Questions from Images for Multimodal Assistants | | 0
Generating Natural Language Explanations for Visual Question Answering using Scene Graphs and Visual Attention | | 0
Instruction-augmented Multimodal Alignment for Image-Text and Element Matching | | 0
Generate then Select: Open-ended Visual Question Answering Guided by World Knowledge | | 0
Generalized Hadamard-Product Fusion Operators for Visual Question Answering | | 0
Uni-Mlip: Unified Self-supervision for Medical Vision Language Pre-training | | 0
Instruction-Oriented Preference Alignment for Enhancing Multi-Modal Comprehension Capability of MLLMs | | 0
Integrating Frequency-Domain Representations with Low-Rank Adaptation in Vision-Language Models | | 0
A reinforcement learning approach for VQA validation: an application to diabetic macular edema grading | | 0
Integrating Knowledge and Reasoning in Image Understanding | | 0
Integrating Object Detection Modality into Visual Language Model for Enhanced Autonomous Driving Agent | | 0
Interactive Attention AI to translate low light photos to captions for night scene understanding in women safety | | 0
Interactive Visual Task Learning for Robots | | 0
Can Generative AI Support Patients' & Caregivers' Informational Needs? Towards Task-Centric Evaluation Of AI Systems | | 0
InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output | | 0
InternLM-XComposer2: Mastering Free-form Text-Image Composition and Comprehension in Vision-Language Large Model | | 0
Generalization Differences between End-to-End and Neuro-Symbolic Vision-Language Reasoning Systems | | 0
Interpretable Bilingual Multimodal Large Language Model for Diverse Biomedical Tasks | | 0
Dynamic Clue Bottlenecks: Towards Interpretable-by-Design Visual Question Answering | | 0
Interpretable Counting for Visual Question Answering | | 0
Interpretable Face Anti-Spoofing: Enhancing Generalization with Multimodal Large Language Models | | 0
Interpretable Medical Image Visual Question Answering via Multi-Modal Relationship Graph Learning | | 0
Interpretable Neural Computation for Real-World Compositional Visual Question Answering | | 0
Interpretable Visual Question Answering Referring to Outside Knowledge | | 0
Interpretable Visual Question Answering by Reasoning on Dependency Trees | | 0
Interpretable Visual Question Answering by Visual Grounding from Attention Supervision Mining | | 0
Interpretable Visual Question Answering via Reasoning Supervision | | 0
Interpretable Visual Reasoning via Probabilistic Formulation under Natural Supervision | | 0
Gender and Racial Bias in Visual Question Answering Datasets | | 0
ArcSin: Adaptive ranged cosine Similarity injected noise for Language-Driven Visual Tasks | | 0
Inverse Visual Question Answering: A New Benchmark and VQA Diagnosis Tool | | 0
Inverse Visual Question Answering with Multi-Level Attentions | | 0
Investigating Biases in Textual Entailment Datasets | | 0
Page 41 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified