| Title | Date | Tags | Code | Citations |
| --- | --- | --- | --- | --- |
| Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion | Jun 14, 2024 | Mixture-of-Experts, Multi-Task Learning | Code Available | 1 |
| DeepUnifiedMom: Unified Time-series Momentum Portfolio Construction via Multi-Task Learning with Multi-Gate Mixture of Experts | Jun 13, 2024 | Management, Mixture-of-Experts | Code Available | 1 |
| Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark | Jun 12, 2024 | Benchmarking, Mixture-of-Experts | Code Available | 1 |
| Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated Parameters | Jun 10, 2024 | Mixture-of-Experts | Code Available | 9 |
| MEFT: Memory-Efficient Fine-Tuning through Sparse Adapter | Jun 7, 2024 | CPU, GPU | Code Available | 1 |
| MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks | Jun 7, 2024 | Computational Efficiency, Mixture-of-Experts | Code Available | 2 |
| Style Mixture of Experts for Expressive Text-To-Speech Synthesis | Jun 5, 2024 | Mixture-of-Experts, Speech Synthesis | Unverified | 0 |
| Continual Traffic Forecasting via Mixture of Experts | Jun 5, 2024 | Continual Learning, Mixture-of-Experts | Unverified | 0 |
| Node-wise Filtering in Graph Neural Networks: A Mixture of Experts Approach | Jun 5, 2024 | Mixture-of-Experts, Node Classification | Unverified | 0 |
| Filtered not Mixed: Stochastic Filtering-Based Online Gating for Mixture of Large Language Models | Jun 5, 2024 | Mixture-of-Experts, Time Series | Unverified | 0 |