SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1401-1450 of 2177 papers

Title | Status | Hype
Robust Visual Reasoning via Language Guided Neural Module Networks | | 0
RS-MoE: Mixture of Experts for Remote Sensing Image Captioning and Visual Question Answering | | 0
RS-RAG: Bridging Remote Sensing Imagery and Comprehensive Knowledge with a Multi-Modal Dataset and Retrieval-Augmented Generation Model | | 0
RSVG: Exploring Data and Models for Visual Grounding on Remote Sensing Data | | 0
RSVQA: Visual Question Answering for Remote Sensing Data | | 0
SAR Strikes Back: A New Hope for RSVQA | | 0
SaSR-Net: Source-Aware Semantic Representation Network for Enhancing Audio-Visual Question Answering | | 0
SA-VQA: Structured Alignment of Visual and Semantic Representations for Visual Question Answering | | 0
Scaling Large Vision-Language Models for Enhanced Multimodal Comprehension In Biomedical Image Analysis | | 0
Scallop: From Probabilistic Deductive Databases to Scalable Differentiable Reasoning | | 0
Sce2DriveX: A Generalized MLLM Framework for Scene-to-Drive Learning | | 0
SceneGATE: Scene-Graph based co-Attention networks for TExt visual question answering | | 0
Scene Graph Generation with Geometric Context | | 0
Scene Graph Reasoning for Visual Question Answering | | 0
A Comprehensive Survey of Scene Graphs: Generation and Application | | 0
Scene-R1: Video-Grounded Large Language Models for 3D Scene Reasoning without 3D Annotations | | 0
Scene Understanding Enabled Semantic Communication with Open Channel Coding | | 0
SC-ML: Self-supervised Counterfactual Metric Learning for Debiased Visual Question Answering | | 0
SCULPT: Shape-Conditioned Unpaired Learning of Pose-dependent Clothed and Textured Human Meshes | | 0
SEA: Supervised Embedding Alignment for Token-Level Visual-Textual Integration in MLLMs | | 0
Second Place Solution of WSDM2023 Toloka Visual Question Answering Challenge | | 0
SpatialPIN: Enhancing Spatial Reasoning Capabilities of Vision-Language Models through Prompting and Interacting 3D Priors | | 0
Seeing and Reasoning with Confidence: Supercharging Multimodal LLMs with an Uncertainty-Aware Agentic Framework | | 0
Seeing Far and Clearly: Mitigating Hallucinations in MLLMs with Attention Causal Decoding | | 0
Seeing is Deceiving: Exploitation of Visual Pathways in Multi-Modal Language Models | | 0
Seeing is Knowing! Fact-based Visual Question Answering using Knowledge Graph Embeddings | | 0
"See the World, Discover Knowledge": A Chinese Factuality Evaluation for Large Vision Language Models | | 0
SegEQA: Video Segmentation Based Visual Attention for Embodied Question Answering | | 0
Segmentation-guided Attention for Visual Question Answering from Remote Sensing Images | | 0
Segmentation Guided Attention Networks for Visual Question Answering | | 0
Select2Plan: Training-Free ICL-Based Planning through VQA and Memory Retrieval | | 0
Selectively Answering Visual Questions | | 0
SelfGraphVQA: A Self-Supervised Graph Neural Network for Scene-based Question Answering | | 0
Self-Segregating and Coordinated-Segregating Transformer for Focused Deep Multi-Modular Network for Visual Question Answering | | 0
WeaQA: Weak Supervision via Captions for Visual Question Answering | | 0
Self-Training Large Language Models for Improved Visual Program Synthesis With Visual Reinforcement | | 0
Semantic Aligned Multi-modal Transformer for Vision-Language Understanding: A Preliminary Study on Visual QA | | 0
Semantic-aware Modular Capsule Routing for Visual Question Answering | | 0
Semantic Composition in Visually Grounded Language Models | | 0
Semantic-enhanced Modality-asymmetric Retrieval for Online E-commerce Search | | 0
Sensor2Text: Enabling Natural Language Interactions for Daily Activity Tracking Using Wearable Sensors | | 0
Sentence Attention Blocks for Answer Grounding | | 0
Separation of Powers: On Segregating Knowledge from Observation in LLM-enabled Knowledge-based Visual Question Answering | | 0
Is the House Ready For Sleeptime? Generating and Evaluating Situational Queries for Embodied Question Answering | | 0
Serving and Optimizing Machine Learning Workflows on Heterogeneous Infrastructures | | 0
SHMamba: Structured Hyperbolic State Space Model for Audio-Visual Question Answering | | 0
Show Why the Answer is Correct! Towards Explainable AI using Compositional Temporal Attention | | 0
SILC: Improving Vision Language Pretraining with Self-Distillation | | 0
Silkie: Preference Distillation for Large Visual Language Models | | 0
Simple is not Easy: A Simple Strong Baseline for TextVQA and TextCaps | | 0
Page 29 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified