| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| Specialized federated learning using a mixture of experts | Oct 5, 2020 | Federated Learning, Mixture-of-Experts | Code Available |
| Meta-DMoE: Adapting to Domain Shift by Meta-Distillation from Mixture-of-Experts | Oct 8, 2022 | Domain Generalization, Knowledge Distillation | Code Available |
| Mixture of Experts Meets Prompt-Based Continual Learning | May 23, 2024 | Continual Learning, Mixture-of-Experts | Code Available |
| MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation | Apr 15, 2022 | Knowledge Distillation, Mixture-of-Experts | Code Available |
| Efficient and Degradation-Adaptive Network for Real-World Image Super-Resolution | Mar 27, 2022 | Image Super-Resolution, Mixture-of-Experts | Code Available |
| Large Multi-modality Model Assisted AI-Generated Image Quality Assessment | Apr 27, 2024 | Image Quality Assessment, Mixture-of-Experts | Code Available |
| MedCoT: Medical Chain of Thought via Hierarchical Expert | Dec 18, 2024 | Diagnostic, Medical Visual Question Answering | Code Available |
| MEFT: Memory-Efficient Fine-Tuning through Sparse Adapter | Jun 7, 2024 | CPU, GPU | Code Available |
| BrainMAP: Learning Multiple Activation Pathways in Brain Networks | Dec 23, 2024 | Mamba, Mixture-of-Experts | Code Available |
| Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts | May 30, 2023 | CPU, GPU | Code Available |