| Title | Date | Topics |
| --- | --- | --- |
| Theory of Mixture-of-Experts for Mobile Edge Computing | Dec 20, 2024 | Computational Efficiency, Continual Learning |
| Theory on Mixture-of-Experts in Continual Learning | Jun 24, 2024 | Continual Learning, Mixture-of-Experts |
| The power of fine-grained experts: Granularity boosts expressivity in Mixture of Experts | May 11, 2025 | Mixture-of-Experts |
| The Ultimate Guide to Fine-Tuning LLMs from Basics to Breakthroughs: An Exhaustive Review of Technologies, Research, Best Practices, Applied Research Challenges and Opportunities | Aug 23, 2024 | Computational Efficiency, Inference Optimization |
| THOR-MoE: Hierarchical Task-Guided and Context-Responsive Routing for Neural Machine Translation | May 20, 2025 | Machine Translation, Mixture-of-Experts |
| Time series forecasting with high stakes: A field study of the air cargo industry | Jul 29, 2024 | Decision Making, Demand Forecasting |
| Time Tracker: Mixture-of-Experts-Enhanced Foundation Time Series Forecasting Model with Decoupled Training Pipelines | May 21, 2025 | Graph Learning, Mixture-of-Experts |
| Tiny-Attention Adapter: Contexts Are More Important Than the Number of Parameters | Oct 18, 2022 | Language Modeling |
| TMoE-P: Towards the Pareto Optimum for Multivariate Soft Sensors | Feb 21, 2023 | Mixture-of-Experts |
| ToMoE: Converting Dense Large Language Models to Mixture-of-Experts through Dynamic Structural Pruning | Jan 25, 2025 | Mixture-of-Experts |
| Topic Compositional Neural Language Model | Dec 28, 2017 | Language Modeling |
| To Repeat or Not To Repeat: Insights from Scaling LLM under Token-Crisis | May 22, 2023 | Mixture-of-Experts |
| Toward Mixture-of-Experts Enabled Trustworthy Semantic Communication for 6G Networks | Sep 24, 2024 | Mixture-of-Experts, Semantic Communication |
| Towards 3D Acceleration for low-power Mixture-of-Experts and Multi-Head Attention Spiking Transformers | Dec 7, 2024 | Mixture-of-Experts |
| Towards A Better Metric for Text-to-Video Generation | Jan 15, 2024 | Mixture-of-Experts, Text-to-Video Generation |
| Towards an empirical understanding of MoE design choices | Feb 20, 2024 | Mixture-of-Experts |
| Towards A Unified View of Sparse Feed-Forward Network in Pretraining Large Language Model | May 23, 2023 | Language Modeling |
| Towards Convergence Rates for Parameter Estimation in Gaussian-gated Mixture of Experts | May 12, 2023 | Ensemble Learning, Mixture-of-Experts |
| Towards Efficient Foundation Model for Zero-shot Amodal Segmentation | Jan 1, 2025 | Mixture-of-Experts |
| Towards Efficient Single Image Dehazing and Desnowing | Apr 19, 2022 | Image Dehazing, Image Restoration |
| Towards Foundational Models for Dynamical System Reconstruction: Hierarchical Meta-Learning via Mixture of Experts | Feb 7, 2025 | Meta-Learning, Mixture-of-Experts |
| Towards Lightweight Neural Animation: Exploration of Neural Network Pruning in Mixture of Experts-based Animation Models | Jan 11, 2022 | Mixture-of-Experts, Network Pruning |
| Towards MoE Deployment: Mitigating Inefficiencies in Mixture-of-Expert (MoE) Inference | Mar 10, 2023 | CPU, Decoder |
| Towards Personalized Federated Multi-Scenario Multi-Task Recommendation | Jun 27, 2024 | Federated Learning, Mixture-of-Experts |
| Towards Smart Point-and-Shoot Photography | May 6, 2025 | Mixture-of-Experts, Word Embeddings |