| Title | Date | Tags | Code |
|---|---|---|---|
| NeuroMoE: A Transformer-Based Mixture-of-Experts Framework for Multi-Modal Neurological Disorder Classification | Jun 17, 2025 | Diagnostic, Mixture-of-Experts | Unverified |
| GuiLoMo: Allocating Expert Number and Rank for LoRA-MoE via Bilevel Optimization with Guided Selection Vectors | Jun 17, 2025 | Bilevel Optimization, Mixture-of-Experts | Code Available |
| MoTE: Mixture of Ternary Experts for Memory-efficient Large Multimodal Models | Jun 17, 2025 | Mixture-of-Experts, Quantization | Unverified |
| Load Balancing Mixture of Experts with Similarity Preserving Routers | Jun 16, 2025 | Mixture-of-Experts | Unverified |
| EAQuant: Enhancing Post-Training Quantization for MoE Models via Expert-Aware Optimization | Jun 16, 2025 | Mixture-of-Experts, Model Compression | Code Available |
| Serving Large Language Models on Huawei CloudMatrix384 | Jun 15, 2025 | Mixture-of-Experts, Quantization | Unverified |
| Optimus-3: Towards Generalist Multimodal Minecraft Agents with Scalable Task Experts | Jun 12, 2025 | Diversity, Minecraft | Unverified |
| GigaChat Family: Efficient Russian Language Modeling Through Mixture of Experts Architecture | Jun 11, 2025 | Language Modeling | Unverified |
| MedMoE: Modality-Specialized Mixture of Experts for Medical Vision-Language Understanding | Jun 10, 2025 | Diagnostic, Mixture-of-Experts | Unverified |
| A Two-Phase Deep Learning Framework for Adaptive Time-Stepping in High-Speed Flow Modeling | Jun 9, 2025 | Mixture-of-Experts | Unverified |