| Title | Date | Topics | Code | Count |
| --- | --- | --- | --- | --- |
| OMoE: Diversifying Mixture of Low-Rank Adaptation by Orthogonal Finetuning | Jan 17, 2025 | Computational Efficiency, Diversity | Unverified | 0 |
| LLM-Based Routing in Mixture of Experts: A Novel Framework for Trading | Jan 16, 2025 | Mixture-of-Experts, World Knowledge | Unverified | 0 |
| MiniMax-01: Scaling Foundation Models with Lightning Attention | Jan 14, 2025 | Mixture-of-Experts | Code Available | 7 |
| PSReg: Prior-guided Sparse Mixture of Experts for Point Cloud Registration | Jan 14, 2025 | Mixture-of-Experts, Point Cloud Registration | Unverified | 0 |
| GRAPHMOE: Amplifying Cognitive Depth of Mixture-of-Experts Network via Introducing Self-Rethinking Mechanism | Jan 14, 2025 | Mixture-of-Experts | Unverified | 0 |
| A Multi-Modal Deep Learning Framework for Pan-Cancer Prognosis | Jan 13, 2025 | Deep Learning, Mixture-of-Experts | Code Available | 0 |
| Transforming Vision Transformer: Towards Efficient Multi-Task Asynchronous Learning | Jan 12, 2025 | Mixture-of-Experts, Multi-Task Learning | Code Available | 1 |
| TAMER: A Test-Time Adaptive MoE-Driven Framework for EHR Representation Learning | Jan 10, 2025 | Mixture-of-Experts, Representation Learning | Code Available | 0 |
| Optimizing Distributed Deployment of Mixture-of-Experts Model Inference in Serverless Computing | Jan 9, 2025 | Bayesian Optimization, CPU | Unverified | 0 |
| mFabric: An Efficient and Scalable Fabric for Mixture-of-Experts Training | Jan 7, 2025 | Blocking, GPU | Unverified | 0 |