| Title | Date | Topics | Code |
| --- | --- | --- | --- |
| Higher Layers Need More LoRA Experts | Feb 13, 2024 | Mixture-of-Experts | Available |
| Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks | Jan 5, 2024 | Arithmetic Reasoning, Code Generation | Available |
| Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning | Dec 22, 2023 | Instruction Following, Mixture-of-Experts | Available |
| LoRAMoE: Alleviate World Knowledge Forgetting in Large Language Models via MoE-Style Plugin | Dec 15, 2023 | Language Modelling, Mixture-of-Experts | Available |
| QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models | Oct 25, 2023 | GPU, Mixture-of-Experts | Available |
| Mixture of Tokens: Continuous MoE through Cross-Example Aggregation | Oct 24, 2023 | Language Modelling, Large Language Model | Available |
| Pushing Mixture of Experts to the Limit: Extremely Parameter Efficient MoE for Instruction Tuning | Sep 11, 2023 | Mixture-of-Experts, Parameter-Efficient Fine-Tuning | Available |
| Fast Feedforward Networks | Aug 28, 2023 | Mixture-of-Experts | Available |
| Motion In-Betweening with Phase Manifolds | Aug 24, 2023 | Mixture-of-Experts, Motion In-Betweening | Available |
| TaskExpert: Dynamically Assembling Multi-Task Representations with Memorial Mixture-of-Experts | Jul 28, 2023 | Long-Range Modeling, Mixture-of-Experts | Available |