| Title | Date | Tags | Code | Repos |
|---|---|---|---|---|
| Ada-K Routing: Boosting the Efficiency of MoE-based LLMs | Oct 14, 2024 | Computational Efficiency, Mixture-of-Experts | Unverified | 0 |
| Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts | Oct 14, 2024 | Mixture-of-Experts | Code Available | 2 |
| Learning to Ground VLMs without Forgetting | Oct 14, 2024 | Decoder, Language Modelling | Unverified | 0 |
| Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free | Oct 14, 2024 | Mixture-of-Experts | Code Available | 2 |
| Scalable Multi-Domain Adaptation of Language Models using Modular Experts | Oct 14, 2024 | Domain Adaptation, General Knowledge | Unverified | 0 |
| Mixture of Experts Made Personalized: Federated Prompt Learning for Vision-Language Models | Oct 14, 2024 | Federated Learning, Mixture-of-Experts | Code Available | 1 |
| Moirai-MoE: Empowering Time Series Foundation Models with Sparse Mixture of Experts | Oct 14, 2024 | Mixture-of-Experts, Time Series | Code Available | 5 |
| ContextWIN: Whittle Index Based Mixture-of-Experts Neural Model For Restless Bandits Via Deep RL | Oct 13, 2024 | Decision Making, Mixture-of-Experts | Unverified | 0 |
| MoIN: Mixture of Introvert Experts to Upcycle an LLM | Oct 13, 2024 | GPU, Language Modeling | Unverified | 0 |
| AT-MoE: Adaptive Task-planning Mixture of Experts via LoRA Approach | Oct 12, 2024 | Mixture-of-Experts, Task Planning | Unverified | 0 |