| Title | Date | Tasks | Code | Count |
| --- | --- | --- | --- | --- |
| ViMoE: An Empirical Study of Designing Vision Mixture-of-Experts | Oct 21, 2024 | Image Classification | Unverified | 0 |
| LoRA-IR: Taming Low-Rank Experts for Efficient All-in-One Image Restoration | Oct 20, 2024 | All, Computational Efficiency | Code Available | 2 |
| MENTOR: Mixture-of-Experts Network with Task-Oriented Perturbation for Visual Reinforcement Learning | Oct 19, 2024 | Deep Reinforcement Learning, Mixture-of-Experts | Unverified | 0 |
| MomentumSMoE: Integrating Momentum into Sparse Mixture of Experts | Oct 18, 2024 | Language Modeling | Code Available | 1 |
| ST-MoE-BERT: A Spatial-Temporal Mixture-of-Experts Framework for Long-Term Cross-City Mobility Prediction | Oct 18, 2024 | Classification, Human Dynamics | Code Available | 1 |
| Enhancing Generalization in Sparse Mixture of Experts Models: The Case for Increased Expert Activation in Compositional Tasks | Oct 17, 2024 | Mixture-of-Experts | Unverified | 0 |
| On the Risk of Evidence Pollution for Malicious Social Text Detection in the Era of LLMs | Oct 16, 2024 | Mixture-of-Experts, Text Detection | Unverified | 0 |
| EPS-MoE: Expert Pipeline Scheduler for Cost-Efficient MoE Inference | Oct 16, 2024 | Computational Efficiency, Large Language Model | Unverified | 0 |
| Understanding Expert Structures on Minimax Parameter Estimation in Contaminated Mixture of Experts | Oct 16, 2024 | Mixture-of-Experts, Parameter Estimation | Unverified | 0 |
| Transformer Layer Injection: A Novel Approach for Efficient Upscaling of Large Language Models | Oct 15, 2024 | Mixture-of-Experts | Unverified | 0 |
| MoE-Pruner: Pruning Mixture-of-Experts Large Language Model using the Hints from Its Router | Oct 15, 2024 | Knowledge Distillation, Language Modeling | Unverified | 0 |
| MoH: Multi-Head Attention as Mixture-of-Head Attention | Oct 15, 2024 | Mixture-of-Experts | Code Available | 4 |
| Quadratic Gating Functions in Mixture of Experts: A Statistical Insight | Oct 15, 2024 | Computational Efficiency, Mixture-of-Experts | Unverified | 0 |
| GaVaMoE: Gaussian-Variational Gated Mixture of Experts for Explainable Recommendation | Oct 15, 2024 | Explainable Recommendation, Language Modeling | Code Available | 1 |
| AlphaLoRA: Assigning LoRA Experts Based on Layer Training Quality | Oct 14, 2024 | Mixture-of-Experts, Parameter-Efficient Fine-Tuning | Code Available | 1 |
| Ada-K Routing: Boosting the Efficiency of MoE-based LLMs | Oct 14, 2024 | Computational Efficiency, Mixture-of-Experts | Unverified | 0 |
| Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts | Oct 14, 2024 | Mixture-of-Experts | Code Available | 2 |
| Learning to Ground VLMs without Forgetting | Oct 14, 2024 | Decoder, Language Modeling | Unverified | 0 |
| Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free | Oct 14, 2024 | Mixture-of-Experts | Code Available | 2 |
| Scalable Multi-Domain Adaptation of Language Models using Modular Experts | Oct 14, 2024 | Domain Adaptation, General Knowledge | Unverified | 0 |
| Mixture of Experts Made Personalized: Federated Prompt Learning for Vision-Language Models | Oct 14, 2024 | Federated Learning, Mixture-of-Experts | Code Available | 1 |
| Moirai-MoE: Empowering Time Series Foundation Models with Sparse Mixture of Experts | Oct 14, 2024 | Mixture-of-Experts, Time Series | Code Available | 5 |
| ContextWIN: Whittle Index Based Mixture-of-Experts Neural Model For Restless Bandits Via Deep RL | Oct 13, 2024 | Decision Making, Mixture-of-Experts | Unverified | 0 |
| MoIN: Mixture of Introvert Experts to Upcycle an LLM | Oct 13, 2024 | GPU, Language Modeling | Unverified | 0 |
| AT-MoE: Adaptive Task-planning Mixture of Experts via LoRA Approach | Oct 12, 2024 | Mixture-of-Experts, Task Planning | Unverified | 0 |