SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 926–950 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| Joint learning of object graph and relation graph for visual question answering | | 0 |
| Linguistically Routing Capsule Network for Out-of-Distribution Visual Question Answering | | 0 |
| Jointly Learning Truth-Conditional Denotations and Groundings using Parallel Attention | | 0 |
| Exploiting Image Captions and External Knowledge as Representation Enhancement for Visual Question Answering | | 0 |
| JTD-UAV: MLLM-Enhanced Joint Tracking and Description Framework for Anti-UAV Systems | | 0 |
| Good, Better, Best: Textual Distractors Generation for Multiple-Choice Visual Question Answering via Reinforcement Learning | | 0 |
| Lightweight In-Context Tuning for Multimodal Unified Models | | 0 |
| 'Just because you are right, doesn't mean I am wrong': Overcoming a bottleneck in development and evaluation of Open-Ended VQA tasks | | 0 |
| KAnoCLIP: Zero-Shot Anomaly Detection through Knowledge-Driven Prompt Learning and Enhanced Cross-Modal Integration | | 0 |
| Goal-Oriented Semantic Communication for Wireless Visual Question Answering | | 0 |
| Kernel Pooling for Convolutional Neural Networks | | 0 |
| γ-MoD: Exploring Mixture-of-Depth Adaptation for Multimodal Large Language Models | | 0 |
| Generating and Evaluating Explanations of Attended and Error-Inducing Input Regions for VQA Models | | 0 |
| A Multimodal Social Agent | | 0 |
| Knowing Where to Look? Analysis on Attention of Visual Question Answering System | | 0 |
| Knowledge Acquisition for Visual Question Answering via Iterative Querying | | 0 |
| Knowledge-Augmented Language Models Interpreting Structured Chest X-Ray Findings | | 0 |
| Does CLIP Benefit Visual Question Answering in the Medical Domain as Much as it Does in the General Domain? | | 0 |
| Consistency and Uncertainty: Identifying Unreliable Responses From Black-Box Vision-Language Models for Selective Visual Question Answering | | 0 |
| GiVE: Guiding Visual Encoder to Perceive Overlooked Information | | 0 |
| Knowledge Detection by Relevant Question and Image Attributes in Visual Question Answering | | 0 |
| Connecting Language and Vision to Actions | | 0 |
| Attentive Explanations: Justifying Decisions and Pointing to the Evidence | | 0 |
| GeoRSMLLM: A Multimodal Large Language Model for Vision-Language Tasks in Geoscience and Remote Sensing | | 0 |
| GeoPix: Multi-Modal Large Language Model for Pixel-level Image Understanding in Remote Sensing | | 0 |
Page 38 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |