| LoRA-Switch: Boosting the Efficiency of Dynamic LLM Adapters via System-Algorithm Co-design | May 28, 2024 | Mixture-of-Experts | Unverified | 0 |
| XTrack: Multimodal Training Boosts RGB-X Video Object Trackers | May 28, 2024 | Inductive Bias, Mixture-of-Experts | Code Available | 2 |
| Yuan 2.0-M32: Mixture of Experts with Attention Router | May 28, 2024 | ARC, Math | Code Available | 2 |
| Enhancing Fast Feed Forward Networks with Load Balancing and a Master Leaf Node | May 27, 2024 | Computational Efficiency, Mixture-of-Experts | Code Available | 1 |
| A Provably Effective Method for Pruning Experts in Fine-tuned Sparse Mixture-of-Experts | May 26, 2024 | Binary Classification, Mixture-of-Experts | Unverified | 0 |
| Decomposing the Neurons: Activation Sparsity via Mixture of Experts for Continual Test Time Adaptation | May 26, 2024 | Feature Selection, Mixture-of-Experts | Code Available | 2 |
| MoEUT: Mixture-of-Experts Universal Transformers | May 25, 2024 | Language Modeling | Code Available | 2 |
| Expert-Token Resonance: Redefining MoE Routing through Affinity-Driven Active Selection | May 24, 2024 | Computational Efficiency, Mixture-of-Experts | Unverified | 0 |
| Revisiting MoE and Dense Speed-Accuracy Comparisons for LLM Training | May 23, 2024 | GSM8K, Mixture-of-Experts | Code Available | 7 |
| Statistical Advantages of Perturbing Cosine Router in Mixture of Experts | May 23, 2024 | Mixture-of-Experts | Unverified | 0 |