| Title | Date | Tags | Code |
|---|---|---|---|
| FreqMoE: Enhancing Time Series Forecasting through Frequency Decomposition Mixture of Experts | Jan 25, 2025 | Mixture-of-Experts, Prediction | Code Available |
| Specialized federated learning using a mixture of experts | Oct 5, 2020 | Federated Learning, Mixture-of-Experts | Code Available |
| Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization | Feb 19, 2024 | Attribute, Counterfactual | Code Available |
| Multimodal Variational Autoencoders for Semi-Supervised Learning: In Defense of Product-of-Experts | Jan 18, 2021 | Mixture-of-Experts | Code Available |
| MomentumSMoE: Integrating Momentum into Sparse Mixture of Experts | Oct 18, 2024 | Language Modeling | Code Available |
| BiMediX: Bilingual Medical Mixture of Experts LLM | Feb 20, 2024 | Mixture-of-Experts, Multiple-choice | Code Available |
| Exploring Sparse MoE in GANs for Text-conditioned Image Synthesis | Sep 7, 2023 | Image Generation, Mixture-of-Experts | Code Available |
| MoGERNN: An Inductive Traffic Predictor for Unobserved Locations in Dynamic Sensing Networks | Jan 21, 2025 | iFun, Mixture-of-Experts | Code Available |
| Frequency-Adaptive Pan-Sharpening with Mixture of Experts | Jan 4, 2024 | Mixture-of-Experts | Code Available |
| MoExtend: Tuning New Experts for Modality and Task Extension | Aug 7, 2024 | Mixture-of-Experts | Code Available |
| Multi-Head Mixture-of-Experts | Apr 23, 2024 | Language Modeling | Code Available |
| EWMoE: An effective model for global weather forecasting with mixture-of-experts | May 9, 2024 | Mixture-of-Experts, Weather Forecasting | Code Available |
| Enhancing NeRF akin to Enhancing LLMs: Generalizable NeRF Transformer with Mixture-of-View-Experts | Aug 22, 2023 | Mixture-of-Experts, NeRF | Code Available |
| Enhancing Fast Feed Forward Networks with Load Balancing and a Master Leaf Node | May 27, 2024 | Computational Efficiency, Mixture-of-Experts | Code Available |
| Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark | Jun 12, 2024 | Benchmarking, Mixture-of-Experts | Code Available |
| MoËT: Mixture of Expert Trees and its Application to Verifiable Reinforcement Learning | Jun 16, 2019 | Game of Go, Imitation Learning | Code Available |
| Emergent Modularity in Pre-trained Transformers | May 28, 2023 | Mixture-of-Experts | Code Available |
| Emotion-Qwen: Training Hybrid Experts for Unified Emotion and General Vision-Language Understanding | May 10, 2025 | Descriptive, Emotion Recognition | Code Available |
| MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation | Apr 15, 2022 | Knowledge Distillation, Mixture-of-Experts | Code Available |
| Distribution-aware Forgetting Compensation for Exemplar-Free Lifelong Person Re-identification | Apr 21, 2025 | Exemplar-Free, Knowledge Distillation | Code Available |
| XMoE: Sparse Models with Fine-grained and Adaptive Expert Selection | Feb 27, 2024 | Language Modeling | Code Available |
| MoEDiff-SR: Mixture of Experts-Guided Diffusion Model for Region-Adaptive MRI Super-Resolution | Apr 9, 2025 | Computational Efficiency, Denoising | Code Available |
| Distilling the Knowledge in a Neural Network | Mar 9, 2015 | Knowledge Distillation, Mixture-of-Experts | Code Available |
| Efficient and Degradation-Adaptive Network for Real-World Image Super-Resolution | Mar 27, 2022 | Image Super-Resolution, Mixture-of-Experts | Code Available |
| Awaker2.5-VL: Stably Scaling MLLMs with Parameter-Efficient Mixture of Experts | Nov 16, 2024 | Mixture-of-Experts, Optical Character Recognition (OCR) | Code Available |