| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| Go Wider Instead of Deeper | Jul 25, 2021 | Image Classification, Mixture-of-Experts | Code Available |
| GraphMETRO: Mitigating Complex Graph Distribution Shifts via Mixture of Aligned Experts | Dec 7, 2023 | Diversity, Graph Neural Network | Code Available |
| GaVaMoE: Gaussian-Variational Gated Mixture of Experts for Explainable Recommendation | Oct 15, 2024 | Explainable Recommendation, Language Modelling | Code Available |
| Distribution-aware Forgetting Compensation for Exemplar-Free Lifelong Person Re-identification | Apr 21, 2025 | Exemplar-Free, Knowledge Distillation | Code Available |
| Graph Sparsification via Mixture of Graphs | May 23, 2024 | Graph Learning, Mixture-of-Experts | Code Available |
| Jakiro: Boosting Speculative Decoding with Decoupled Multi-Head via MoE | Feb 10, 2025 | Diversity, Language Modeling | Code Available |
| Distilling the Knowledge in a Neural Network | Mar 9, 2015 | Knowledge Distillation, Mixture-of-Experts | Code Available |
| FreqMoE: Enhancing Time Series Forecasting through Frequency Decomposition Mixture of Experts | Jan 25, 2025 | Mixture-of-Experts, Prediction | Code Available |
| DMT-HI: MOE-based Hyperbolic Interpretable Deep Manifold Transformation for Unsupervised Dimensionality Reduction | Oct 25, 2024 | Dimensionality Reduction, Mixture-of-Experts | Code Available |
| Awaker2.5-VL: Stably Scaling MLLMs with Parameter-Efficient Mixture of Experts | Nov 16, 2024 | Mixture-of-Experts, Optical Character Recognition (OCR) | Code Available |