| Title | Date | Topics | Code |
| --- | --- | --- | --- |
| GaVaMoE: Gaussian-Variational Gated Mixture of Experts for Explainable Recommendation | Oct 15, 2024 | Explainable Recommendation, Language Modelling | Code Available |
| AlphaLoRA: Assigning LoRA Experts Based on Layer Training Quality | Oct 14, 2024 | Mixture-of-Experts, Parameter-Efficient Fine-Tuning | Code Available |
| Mixture of Experts Made Personalized: Federated Prompt Learning for Vision-Language Models | Oct 14, 2024 | Federated Learning, Mixture-of-Experts | Code Available |
| Retraining-Free Merging of Sparse MoE via Hierarchical Clustering | Oct 11, 2024 | Clustering, Language Modeling | Code Available |
| Efficient Dictionary Learning with Switch Sparse Autoencoders | Oct 10, 2024 | Dictionary Learning, Mixture-of-Experts | Code Available |
| Model-GLUE: Democratized LLM Scaling for A Large Model Zoo in the Wild | Oct 7, 2024 | Benchmarking, Mixture-of-Experts | Code Available |
| Searching for Efficient Linear Layers over a Continuous Space of Structured Matrices | Oct 3, 2024 | Mixture-of-Experts | Code Available |
| A Time Series is Worth Five Experts: Heterogeneous Mixture of Experts for Traffic Flow Prediction | Sep 26, 2024 | Mixture-of-Experts, Prediction | Code Available |
| Uni-Med: A Unified Medical Generalist Foundation Model For Multi-Task Learning Via Connector-MoE | Sep 26, 2024 | Image Classification | Code Available |
| LOLA -- An Open-Source Massively Multilingual Large Language Model | Sep 17, 2024 | Diversity, Language Modeling | Code Available |