| Title | Date | Tags | Code | Count |
| --- | --- | --- | --- | --- |
| MoE-LPR: Multilingual Extension of Large Language Models through Mixture-of-Experts with Language Priors Routing | Aug 21, 2024 | Mixture-of-Experts | Code Available | 0 |
| FedMoE: Personalized Federated Learning via Heterogeneous Mixture of Experts | Aug 21, 2024 | Federated Learning, Heuristic Search | Unverified | 0 |
| KAN4TSF: Are KAN and KAN-based models Effective for Time Series Forecasting? | Aug 21, 2024 | Mixture-of-Experts, Time Series | Code Available | 2 |
| Navigating Spatio-Temporal Heterogeneity: A Graph Transformer Approach for Traffic Forecasting | Aug 20, 2024 | Attribute, Mixture-of-Experts | Code Available | 1 |
| HMoE: Heterogeneous Mixture of Experts for Language Modeling | Aug 20, 2024 | Computational Efficiency, Language Modeling | Unverified | 0 |
| AnyGraph: Graph Foundation Model in the Wild | Aug 20, 2024 | Graph Learning, Mixture-of-Experts | Code Available | 3 |
| AdapMoE: Adaptive Sensitivity-based Expert Gating and Management for Efficient MoE Inference | Aug 19, 2024 | Management, Mixture-of-Experts | Code Available | 1 |
| A Unified Framework for Iris Anti-Spoofing: Introducing IrisGeneral Dataset and Masked-MoE Method | Aug 19, 2024 | Iris Recognition, Mixture-of-Experts | Unverified | 0 |
| Customizing Language Models with Instance-wise LoRA for Sequential Recommendation | Aug 19, 2024 | Mixture-of-Experts, Multi-Task Learning | Code Available | 1 |
| FEDKIM: Adaptive Federated Knowledge Injection into Medical Foundation Models | Aug 17, 2024 | Federated Learning, Mixture-of-Experts | Code Available | 0 |