| P-Tailor: Customizing Personality Traits for Language Models via Mixture of Specialized LoRA Experts | Jun 18, 2024 | Mixture-of-Experts | Unverified | 0 |
| GW-MoE: Resolving Uncertainty in MoE Router with Global Workspace Theory | Jun 18, 2024 | Code Generation, Mathematical Problem-Solving | Code Available | 0 |
| Variational Distillation of Diffusion Policies into Mixture of Experts | Jun 18, 2024 | Denoising, Mixture-of-Experts | Unverified | 0 |
| Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts | Jun 18, 2024 | Language Modeling | Code Available | 5 |
| Not Eliminate but Aggregate: Post-Hoc Control over Mixture-of-Experts to Address Shortcut Shifts in Natural Language Understanding | Jun 17, 2024 | Mixture-of-Experts, Natural Language Understanding | Code Available | 0 |
| Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts | Jun 17, 2024 | Mixture-of-Experts | Code Available | 1 |
| DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence | Jun 17, 2024 | 16k, Language Modeling | Code Available | 9 |
| Graph Knowledge Distillation to Mixture of Experts | Jun 17, 2024 | Knowledge Distillation, Mixture-of-Experts | Code Available | 0 |
| MoE-RBench: Towards Building Reliable Language Models with Sparse Mixture-of-Experts | Jun 17, 2024 | Hallucination, Mixture-of-Experts | Code Available | 1 |
| Interpretable Cascading Mixture-of-Experts for Urban Traffic Congestion Prediction | Jun 14, 2024 | Mixture-of-Experts, Prediction | Unverified | 0 |