| Title | Date | Topics | Code | # |
| --- | --- | --- | --- | --- |
| MoNTA: Accelerating Mixture-of-Experts Training with Network-Traffic-Aware Parallel Optimization | Nov 1, 2024 | Mixture-of-Experts | Code Available | 0 |
| MoE-I^2: Compressing Mixture of Experts Models through Inter-Expert Pruning and Intra-Expert Low-Rank Decomposition | Nov 1, 2024 | Mixture-of-Experts | Code Available | 0 |
| LIBMoE: A Library for comprehensive benchmarking Mixture of Experts in Large Language Models | Nov 1, 2024 | Benchmarking, Mixture-of-Experts | Code Available | 1 |
| Stereo-Talker: Audio-driven 3D Human Synthesis with Prior-Guided Mixture-of-Experts | Oct 31, 2024 | Language Modeling | Unverified | 0 |
| Efficient and Interpretable Grammatical Error Correction with Mixture of Experts | Oct 30, 2024 | Grammatical Error Correction, Mixture-of-Experts | Code Available | 0 |
| MALoRA: Mixture of Asymmetric Low-Rank Adaptation for Enhanced Multi-Task Learning | Oct 30, 2024 | Computational Efficiency, Mixture-of-Experts | Unverified | 0 |
| Stealing User Prompts from Mixture of Experts | Oct 30, 2024 | Mixture-of-Experts | Unverified | 0 |
| ProMoE: Fast MoE-based LLM Serving using Proactive Caching | Oct 29, 2024 | GPU, Mixture-of-Experts | Unverified | 0 |
| Neural Experts: Mixture of Experts for Implicit Neural Representations | Oct 29, 2024 | Image Reconstruction, Mixture-of-Experts | Unverified | 0 |
| Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging | Oct 29, 2024 | Mixture-of-Experts, Multi-Task Learning | Unverified | 0 |