| Title | Date | Topics |
| --- | --- | --- |
| MM1.5: Methods, Analysis & Insights from Multimodal LLM Fine-tuning | Sep 30, 2024 | Mixture-of-Experts, Optical Character Recognition (OCR) |
| MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training | Mar 14, 2024 | In-Context Learning, Mixture-of-Experts |
| MMoE: Robust Spoiler Detection with Multi-modal Information and Domain-aware Mixture-of-Experts | Mar 8, 2024 | Domain Generalization, Mixture-of-Experts |
| μ-MoE: Test-Time Pruning as Micro-Grained Mixture-of-Experts | May 24, 2025 | Mixture-of-Experts |
| MoA: Mixture-of-Attention for Subject-Context Disentanglement in Personalized Image Generation | Apr 17, 2024 | Disentanglement, Image Generation |
| MobileFlow: A Multimodal LLM For Mobile GUI Agent | Jul 5, 2024 | Action Analysis, Language Modelling |
| Mobile V-MoEs: Scaling Down Vision Transformers via Sparse Mixture-of-Experts | Sep 8, 2023 | Mixture-of-Experts |
| Mod-Adapter: Tuning-Free and Versatile Multi-concept Personalization via Modulation Adapter | May 24, 2025 | Image Generation, Mixture-of-Experts |
| MoDE: A Mixture-of-Experts Model with Mutual Distillation among the Experts | Jan 31, 2024 | Mixture-of-Experts |
| Model Agnostic Combination for Ensemble Learning | Jun 16, 2020 | Ensemble Learning, Mixture-of-Experts |