SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 901–950 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| Interpretable Visual Question Answering via Reasoning Supervision | — | 0 |
| Interpretable Visual Reasoning via Probabilistic Formulation under Natural Supervision | — | 0 |
| GraspCorrect: Robotic Grasp Correction via Vision-Language Model-Guided Feedback | — | 0 |
| Graph-Structured Representations for Visual Question Answering | — | 0 |
| Inverse Visual Question Answering: A New Benchmark and VQA Diagnosis Tool | — | 0 |
| Inverse Visual Question Answering with Multi-Level Attentions | — | 0 |
| Graph Relation Transformer: Incorporating pairwise object features into the Transformer architecture | — | 0 |
| Bilinear Graph Networks for Visual Question Answering | — | 0 |
| Analysis of Visual Question Answering Algorithms with attention model | — | 0 |
| Graph Neural Networks in Vision-Language Image Understanding: A Survey | — | 0 |
| A Unified Framework for Multilingual and Code-Mixed Visual Question Answering | — | 0 |
| ISAAQ -- Mastering Textbook Questions with Pre-trained Transformers and Bottom-Up and Top-Down Attention | — | 0 |
| Is GPT-3 all you need for Visual Question Answering in Cultural Heritage? | — | 0 |
| Graph-based Heuristic Search for Module Selection Procedure in Neural Module Network | — | 0 |
| LiT-4-RSVQA: Lightweight Transformer-based Visual Question Answering in Remote Sensing | — | 0 |
| It Takes Two to Tango: Towards Theory of AI's Mind | — | 0 |
| iVQA: Inverse Visual Question Answering | — | 0 |
| GRAM: Global Reasoning for Multi-Page VQA | — | 0 |
| GRADE: Quantifying Sample Diversity in Text-to-Image Models | — | 0 |
| LinguaMark: Do Multimodal Models Speak Fairly? A Benchmark-Based Evaluation | — | 0 |
| Linguistically Driven Graph Capsule Network for Visual Question Reasoning | — | 0 |
| AMXFP4: Taming Activation Outliers with Asymmetric Microscaling Floating-Point for 4-bit LLM Inference | — | 0 |
| GPT-4V Explorations: Mining Autonomous Driving | — | 0 |
| Lingshu: A Generalist Foundation Model for Unified Multimodal Medical Understanding and Reasoning | — | 0 |
| Joint learning of object graph and relation graph for visual question answering | — | 0 |
| Linguistically Routing Capsule Network for Out-of-Distribution Visual Question Answering | — | 0 |
| Jointly Learning Truth-Conditional Denotations and Groundings using Parallel Attention | — | 0 |
| Exploiting Image Captions and External Knowledge as Representation Enhancement for Visual Question Answering | — | 0 |
| JTD-UAV: MLLM-Enhanced Joint Tracking and Description Framework for Anti-UAV Systems | — | 0 |
| Good, Better, Best: Textual Distractors Generation for Multiple-Choice Visual Question Answering via Reinforcement Learning | — | 0 |
| Lightweight In-Context Tuning for Multimodal Unified Models | — | 0 |
| 'Just because you are right, doesn't mean I am wrong': Overcoming a bottleneck in development and evaluation of Open-Ended VQA tasks | — | 0 |
| KAnoCLIP: Zero-Shot Anomaly Detection through Knowledge-Driven Prompt Learning and Enhanced Cross-Modal Integration | — | 0 |
| Goal-Oriented Semantic Communication for Wireless Visual Question Answering | — | 0 |
| Kernel Pooling for Convolutional Neural Networks | — | 0 |
| γ-MoD: Exploring Mixture-of-Depth Adaptation for Multimodal Large Language Models | — | 0 |
| Generating and Evaluating Explanations of Attended and Error-Inducing Input Regions for VQA Models | — | 0 |
| A Multimodal Social Agent | — | 0 |
| Knowing Where to Look? Analysis on Attention of Visual Question Answering System | — | 0 |
| Knowledge Acquisition for Visual Question Answering via Iterative Querying | — | 0 |
| Knowledge-Augmented Language Models Interpreting Structured Chest X-Ray Findings | — | 0 |
| Does CLIP Benefit Visual Question Answering in the Medical Domain as Much as it Does in the General Domain? | — | 0 |
| Consistency and Uncertainty: Identifying Unreliable Responses From Black-Box Vision-Language Models for Selective Visual Question Answering | — | 0 |
| GiVE: Guiding Visual Encoder to Perceive Overlooked Information | — | 0 |
| Knowledge Detection by Relevant Question and Image Attributes in Visual Question Answering | — | 0 |
| Connecting Language and Vision to Actions | — | 0 |
| Attentive Explanations: Justifying Decisions and Pointing to the Evidence | — | 0 |
| GeoRSMLLM: A Multimodal Large Language Model for Vision-Language Tasks in Geoscience and Remote Sensing | — | 0 |
| GeoPix: Multi-Modal Large Language Model for Pixel-level Image Understanding in Remote Sensing | — | 0 |
Page 19 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |