SOTAVerified

Multimodal Large Language Model

Papers

Showing 26–50 of 347 papers

| Title | Status | Hype |
| --- | --- | --- |
| Valley2: Exploring Multimodal Models with Scalable Vision-Language Design | Code | 3 |
| ShapeLLM: Universal 3D Object Understanding for Embodied Interaction | Code | 3 |
| LaVy: Vietnamese Multimodal Large Language Model | Code | 2 |
| Jailbreaking Attack against Multimodal Large Language Model | Code | 2 |
| Keeping Yourself is Important in Downstream Tuning Multimodal Large Language Model | Code | 2 |
| LHRS-Bot-Nova: Improved Multimodal Large Language Model for Remote Sensing Vision-Language Interpretation | Code | 2 |
| ChemVLM: Exploring the Power of Multimodal Large Language Models in Chemistry Area | Code | 2 |
| Referring to Any Person | Code | 2 |
| Holmes-VAD: Towards Unbiased and Explainable Video Anomaly Detection via Multi-modal LLM | Code | 2 |
| Parameter-Inverted Image Pyramid Networks for Visual Perception and Multimodal Understanding | Code | 2 |
| Introducing Visual Perception Token into Multimodal Large Language Model | Code | 2 |
| PDF-WuKong: A Large Multimodal Model for Efficient Long PDF Reading with End-to-End Sparse Sampling | Code | 2 |
| Protecting Privacy in Multimodal Large Language Models with MLLMU-Bench | Code | 2 |
| One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos | Code | 2 |
| Next Token Is Enough: Realistic Image Quality and Aesthetic Scoring with Multimodal Large Language Model | Code | 2 |
| OpenAD: Open-World Autonomous Driving Benchmark for 3D Object Detection | Code | 2 |
| MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Code | 2 |
| Explore the Limits of Omni-modal Pretraining at Scale | Code | 2 |
| Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast | Code | 2 |
| CLIP-MoE: Towards Building Mixture of Experts for CLIP with Diversified Multiplet Upcycling | Code | 2 |
| GeReA: Question-Aware Prompt Captions for Knowledge-based Visual Question Answering | Code | 2 |
| Dimple: Discrete Diffusion Multimodal Large Language Model with Parallel Decoding | Code | 2 |
| MedTsLLM: Leveraging LLMs for Multimodal Medical Time Series Analysis | Code | 2 |
| ChartCoder: Advancing Multimodal Large Language Model for Chart-to-Code Generation | Code | 2 |
| Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want | Code | 2 |
Page 2 of 14

No leaderboard results yet.