| Paper | Date | Tags | Code | Stars |
| --- | --- | --- | --- | --- |
| A Large-scale Medical Visual Task Adaptation Benchmark | Apr 19, 2024 | Mixture-of-Experts | Unverified | 0 |
| MoA: Mixture-of-Attention for Subject-Context Disentanglement in Personalized Image Generation | Apr 17, 2024 | Disentanglement, Image Generation | Unverified | 0 |
| Generative AI Agents with Large Language Model for Satellite Networks via a Mixture of Experts Transmission | Apr 14, 2024 | Language Modeling, Language Modelling | Unverified | 0 |
| Intuition-aware Mixture-of-Rank-1-Experts for Parameter Efficient Finetuning | Apr 13, 2024 | Diversity, Mixture-of-Experts | Unverified | 0 |
| Mixture of Experts Soften the Curse of Dimensionality in Operator Learning | Apr 13, 2024 | Mixture-of-Experts, Operator learning | Unverified | 0 |
| Countering Mainstream Bias via End-to-End Adaptive Local Learning | Apr 13, 2024 | Collaborative Filtering, Mixture-of-Experts | Code Available | 0 |
| Identifying Shopping Intent in Product QA for Proactive Recommendations | Apr 9, 2024 | Friction, Mixture-of-Experts | Unverified | 0 |
| Dense Training, Sparse Inference: Rethinking Training of Mixture-of-Experts Language Models | Apr 8, 2024 | GPU, Mixture-of-Experts | Unverified | 0 |
| SEER-MoE: Sparse Expert Efficiency through Regularization for Mixture-of-Experts | Apr 7, 2024 | Mixture-of-Experts | Unverified | 0 |
| Shortcut-connected Expert Parallelism for Accelerating Mixture-of-Experts | Apr 7, 2024 | Mixture-of-Experts | Unverified | 0 |
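The common building block behind these entries (e.g. SEER-MoE, the dense-training/sparse-inference work, and the expert-parallelism paper) is the sparsely gated mixture-of-experts layer: a router scores experts per token, and only the top-k experts run. The sketch below is a minimal, generic illustration of that top-k routing in pure NumPy; the function name, shapes, and toy sizes are all assumptions for illustration, not the implementation of any listed paper.

```python
import numpy as np

def moe_forward(x, gate_w, expert_ws, k=2):
    """Generic top-k MoE sketch: route each token to its k highest-scoring
    experts and mix their outputs with softmax-renormalized gate weights.

    x:         (tokens, d_model) input activations
    gate_w:    (d_model, n_experts) router weights (hypothetical layout)
    expert_ws: list of (d_model, d_model) per-expert weight matrices
    """
    logits = x @ gate_w                               # (tokens, n_experts)
    topk = np.argsort(logits, axis=-1)[:, -k:]        # indices of the k best experts
    sel = np.take_along_axis(logits, topk, axis=-1)   # logits of selected experts only
    sel = np.exp(sel - sel.max(axis=-1, keepdims=True))
    weights = sel / sel.sum(axis=-1, keepdims=True)   # (tokens, k), sums to 1 per token

    out = np.zeros_like(x)
    for t in range(x.shape[0]):                       # dense loop; real systems batch by expert
        for slot in range(k):
            e = topk[t, slot]
            out[t] += weights[t, slot] * (x[t] @ expert_ws[e])
    return out

# Toy usage: 4 tokens, model dim 8, 4 experts, top-2 routing.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
gate_w = rng.normal(size=(8, 4))
expert_ws = [rng.normal(size=(8, 8)) for _ in range(4)]
print(moe_forward(x, gate_w, expert_ws, k=2).shape)   # (4, 8)
```

Only k of the experts run per token, which is what the sparse-inference and expert-parallelism papers above optimize; the per-token loop here would be replaced by grouping tokens per expert in any practical system.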