SOTAVerified

Multimodal Large Language Model

Papers

Showing 31–40 of 347 papers

| Title | Status | Hype |
| --- | --- | --- |
| Paint by Inpaint: Learning to Add Image Objects by Removing Them First | Code | 2 |
| GeReA: Question-Aware Prompt Captions for Knowledge-based Visual Question Answering | Code | 2 |
| mmE5: Improving Multimodal Multilingual Embeddings via High-quality Synthetic Data | Code | 2 |
| MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Code | 2 |
| Next Token Is Enough: Realistic Image Quality and Aesthetic Scoring with Multimodal Large Language Model | Code | 2 |
| Parameter-Inverted Image Pyramid Networks for Visual Perception and Multimodal Understanding | Code | 2 |
| LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding | Code | 2 |
| Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast | Code | 2 |
| Explore the Limits of Omni-modal Pretraining at Scale | Code | 2 |
| LLMGA: Multimodal Large Language Model based Generation Assistant | Code | 2 |
Page 4 of 35

No leaderboard results yet.