SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1651–1700 of 2177 papers

Title | Status | Hype
Separation of Powers: On Segregating Knowledge from Observation in LLM-enabled Knowledge-based Visual Question Answering | — | 0
Is the House Ready For Sleeptime? Generating and Evaluating Situational Queries for Embodied Question Answering | — | 0
Serving and Optimizing Machine Learning Workflows on Heterogeneous Infrastructures | — | 0
Visual Superordinate Abstraction for Robust Concept Learning | — | 0
3D Question Answering | — | 0
Can Pre-training help VQA with Lexical Variations? | — | 0
SHMamba: Structured Hyperbolic State Space Model for Audio-Visual Question Answering | — | 0
Visual TTR - Modelling Visual Question Answering in Type Theory with Records | — | 0
Can Open Domain Question Answering Systems Answer Visual Knowledge Questions? | — | 0
Can Multimodal LLMs do Visual Temporal Understanding and Reasoning? The answer is No! | — | 0
Show Why the Answer is Correct! Towards Explainable AI using Compositional Temporal Attention | — | 0
ViT3D Alignment of LLaMA3: 3D Medical Image Report Generation | — | 0
SILC: Improving Vision Language Pretraining with Self-Distillation | — | 0
Silkie: Preference Distillation for Large Visual Language Models | — | 0
Generating Question Relevant Captions to Aid Visual Question Answering | — | 0
ViUniT: Visual Unit Tests for More Robust Visual Programming | — | 0
Can LLMs Generate Human-Like Wayfinding Instructions? Towards Platform-Agnostic Embodied Instruction Synthesis | — | 0
Adversarial Representation Learning for Text-to-Image Matching | — | 0
Can Large Language Models Unveil the Mysteries? An Exploration of Their Ability to Unlock Information in Complex Scenarios | — | 0
Simple is not Easy: A Simple Strong Baseline for TextVQA and TextCaps | — | 0
SimpleLLM4AD: An End-to-End Vision-Language Model with Graph Visual Question Answering for Autonomous Driving | — | 0
SimpleVQA: Multimodal Factuality Evaluation for Multimodal Large Language Models | — | 0
SimpsonsVQA: Enhancing Inquiry-Based Learning with a Tailored Dataset | — | 0
Can Common VLMs Rival Medical VLMs? Evaluation and Strategic Insights | — | 0
SimVQA: Exploring Simulated Environments for Visual Question Answering | — | 0
Single-Modal Entropy based Active Learning for Visual Question Answering | — | 0
Adversarial Regularization for Visual Question Answering: Strengths, Shortcomings, and Side Effects | — | 0
SITE: towards Spatial Intelligence Thorough Evaluation | — | 0
Can CLIP Count Stars? An Empirical Study on Quantity Bias in CLIP | — | 0
Calibrating Uncertainty Quantification of Multi-Modal LLMs using Grounding | — | 0
CAD -- Contextual Multi-modal Alignment for Dynamic AVQA | — | 0
Building Trustworthy Multimodal AI: A Review of Fairness, Transparency, and Ethics in Vision-Language Tasks | — | 0
BuDDIE: A Business Document Dataset for Multi-task Information Extraction | — | 0
3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models | — | 0
Small Language Model Meets with Reinforced Vision Vocabulary | — | 0
SMMILE: An Expert-Driven Benchmark for Multimodal Medical In-Context Learning | — | 0
Adversarial Multimodal Network for Movie Question Answering | — | 0
SnapNTell: Enhancing Entity-Centric Visual Question Answering with Retrieval Augmented Multimodal LLM | — | 0
SocialGesture: Delving into Multi-person Gesture Understanding | — | 0
Bridging the Semantic Gaps: Improving Medical VQA Consistency with LLM-Augmented Question Sets | — | 0
VL-BEiT: Generative Vision-Language Pretraining | — | 0
Solution for SMART-101 Challenge of CVPR Multi-modal Algorithmic Reasoning Task 2024 | — | 0
Solution for SMART-101 Challenge of ICCV Multi-modal Algorithmic Reasoning Task 2023 | — | 0
Solving Visual Madlibs with Multiple Cues | — | 0
Adversarial Attacks Beyond the Image Space | — | 0
Sparks of Artificial General Intelligence(AGI) in Semiconductor Material Science: Early Explorations into the Next Frontier of Generative AI-Assisted Electron Micrograph Analysis | — | 0
Adventurer's Treasure Hunt: A Transparent System for Visually Grounded Compositional Visual Question Answering based on Scene Graphs | — | 0
Bridge Damage Cause Estimation Using Multiple Images Based on Visual Question Answering | — | 0
VLFeedback: A Large-Scale AI Feedback Dataset for Large Vision-Language Models Alignment | — | 0
Sparse Attention Vectors: Generative Multimodal Model Features Are Discriminative Vision-Language Classifiers | — | 0
Page 34 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | — | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified