| Theory on Mixture-of-Experts in Continual Learning | Jun 24, 2024 | Continual Learning, Mixture-of-Experts | —Unverified | 0 |
| The power of fine-grained experts: Granularity boosts expressivity in Mixture of Experts | May 11, 2025 | Mixture-of-Experts | —Unverified | 0 |
| The Ultimate Guide to Fine-Tuning LLMs from Basics to Breakthroughs: An Exhaustive Review of Technologies, Research, Best Practices, Applied Research Challenges and Opportunities | Aug 23, 2024 | Computational Efficiency, Inference Optimization | —Unverified | 0 |
| THOR-MoE: Hierarchical Task-Guided and Context-Responsive Routing for Neural Machine Translation | May 20, 2025 | Machine Translation, Mixture-of-Experts | —Unverified | 0 |
| Time series forecasting with high stakes: A field study of the air cargo industry | Jul 29, 2024 | Decision Making, Demand Forecasting | —Unverified | 0 |
| Time Tracker: Mixture-of-Experts-Enhanced Foundation Time Series Forecasting Model with Decoupled Training Pipelines | May 21, 2025 | Graph Learning, Mixture-of-Experts | —Unverified | 0 |
| Tiny-Attention Adapter: Contexts Are More Important Than the Number of Parameters | Oct 18, 2022 | Language Modeling | —Unverified | 0 |
| TMoE-P: Towards the Pareto Optimum for Multivariate Soft Sensors | Feb 21, 2023 | Mixture-of-Experts | —Unverified | 0 |
| ToMoE: Converting Dense Large Language Models to Mixture-of-Experts through Dynamic Structural Pruning | Jan 25, 2025 | Mixture-of-Experts | —Unverified | 0 |
| Topic Compositional Neural Language Model | Dec 28, 2017 | Language Modeling | —Unverified | 0 |