| Title | Date | Tags | Code | Count |
| --- | --- | --- | --- | --- |
| GuiLoMo: Allocating Expert Number and Rank for LoRA-MoE via Bilevel Optimization with GuidedSelection Vectors | Jun 17, 2025 | Bilevel Optimization, Mixture-of-Experts | Code Available | 0 |
| Single-Example Learning in a Mixture of GPDMs with Latent Geometries | Jun 17, 2025 | Mixture-of-Experts | Unverified | 0 |
| Ring-lite: Scalable Reasoning via C3PO-Stabilized Reinforcement Learning for LLMs | Jun 17, 2025 | Data Integration, Large Language Model | Unverified | 0 |
| Scaling Intelligence: Designing Data Centers for Next-Gen Language Models | Jun 17, 2025 | Mixture-of-Experts | Unverified | 0 |
| MoTE: Mixture of Ternary Experts for Memory-efficient Large Multimodal Models | Jun 17, 2025 | Mixture-of-Experts, Quantization | Unverified | 0 |
| Exploring Speaker Diarization with Mixture of Experts | Jun 17, 2025 | Mixture-of-Experts, Speaker Diarization | Unverified | 0 |
| MiniMax-M1: Scaling Test-Time Compute Efficiently with Lightning Attention | Jun 16, 2025 | Mixture-of-Experts, Reinforcement Learning (RL) | Code Available | 7 |
| Load Balancing Mixture of Experts with Similarity Preserving Routers | Jun 16, 2025 | Mixture-of-Experts | Unverified | 0 |
| EAQuant: Enhancing Post-Training Quantization for MoE Models via Expert-Aware Optimization | Jun 16, 2025 | Mixture-of-Experts, Model Compression | Code Available | 0 |
| Serving Large Language Models on Huawei CloudMatrix384 | Jun 15, 2025 | Mixture-of-Experts, Quantization | Unverified | 0 |
| Structural Similarity-Inspired Unfolding for Lightweight Image Super-Resolution | Jun 13, 2025 | Image Super-Resolution, Mixture-of-Experts | Code Available | 1 |
| Optimus-3: Towards Generalist Multimodal Minecraft Agents with Scalable Task Experts | Jun 12, 2025 | Diversity, Minecraft | Unverified | 0 |
| GigaChat Family: Efficient Russian Language Modeling Through Mixture of Experts Architecture | Jun 11, 2025 | Language Modeling | Unverified | 0 |
| MedMoE: Modality-Specialized Mixture of Experts for Medical Vision-Language Understanding | Jun 10, 2025 | Diagnostic, Mixture-of-Experts | Unverified | 0 |
| A Two-Phase Deep Learning Framework for Adaptive Time-Stepping in High-Speed Flow Modeling | Jun 9, 2025 | Mixture-of-Experts | Code Available | 0 |
| M2Restore: Mixture-of-Experts-based Mamba-CNN Fusion Framework for All-in-One Image Restoration | Jun 9, 2025 | All-in-One Image Restoration | Unverified | 0 |
| MIRA: Medical Time Series Foundation Model for Real-World Health Data | Jun 9, 2025 | Ethics, Missing Values | Unverified | 0 |
| STAMImputer: Spatio-Temporal Attention MoE for Traffic Data Imputation | Jun 9, 2025 | Graph Attention, Imputation | Code Available | 0 |
| MoE-MLoRA for Multi-Domain CTR Prediction: Efficient Adaptation with Expert Specialization | Jun 9, 2025 | Click-Through Rate Prediction, Diversity | Code Available | 0 |
| MoE-GPS: Guidlines for Prediction Strategy for Dynamic Expert Duplication in MoE Load Balancing | Jun 9, 2025 | GPU, Mixture-of-Experts | Unverified | 0 |
| Breaking Data Silos: Towards Open and Scalable Mobility Foundation Models via Generative Continual Learning | Jun 7, 2025 | Continual Learning, Federated Learning | Unverified | 0 |
| SMAR: Soft Modality-Aware Routing Strategy for MoE-based Multimodal Large Language Models Preserving Language Capabilities | Jun 6, 2025 | Mixture-of-Experts | Unverified | 0 |
| Lifelong Evolution: Collaborative Learning between Large and Small Language Models for Continuous Emergent Fake News Detection | Jun 5, 2025 | Fake News Detection, Knowledge Editing | Unverified | 0 |
| FlashDMoE: Fast Distributed MoE in a Single Kernel | Jun 5, 2025 | 16k, CPU | Code Available | 3 |
| Brain-Like Processing Pathways Form in Models With Heterogeneous Experts | Jun 3, 2025 | Form, Mixture-of-Experts | Unverified | 0 |