| Title | Date | Tags | Code Status | Count |
| --- | --- | --- | --- | --- |
| GEMINUS: Dual-aware Global and Scene-Adaptive Mixture-of-Experts for End-to-End Autonomous Driving | Jul 19, 2025 | Autonomous Driving, Bench2Drive | Code Available | 0 |
| R^2MoE: Redundancy-Removal Mixture of Experts for Lifelong Concept Learning | Jul 17, 2025 | Mixture-of-Experts | Code Available | 0 |
| Mixture of Experts in Large Language Models | Jul 15, 2025 | Diversity, Language Modeling | Unverified | 0 |
| Inter2Former: Dynamic Hybrid Attention for Efficient High-Precision Interactive Segmentation | Jul 13, 2025 | CPU, Interactive Segmentation | Unverified | 0 |
| KAT-V1: Kwai-AutoThink Technical Report | Jul 11, 2025 | Knowledge Distillation, Large Language Model | Unverified | 0 |
| MoFE-Time: Mixture of Frequency Domain Experts for Time-Series Forecasting Models | Jul 9, 2025 | Mixture-of-Experts, Time Series | Code Available | 2 |
| Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate | Jul 8, 2025 | Continual Learning, Mixture-of-Experts | Code Available | 0 |
| Efficient Training of Large-Scale AI Models Through Federated Mixture-of-Experts: A System-Level Approach | Jul 8, 2025 | Edge-computing, Federated Learning | Unverified | 0 |
| A Survey on Prompt Tuning | Jul 8, 2025 | Computational Efficiency, Mixture-of-Experts | Code Available | 0 |
| Speech Quality Assessment Model Based on Mixture of Experts: System-Level Performance Enhancement and Utterance-Level Challenge Analysis | Jul 8, 2025 | Data Augmentation, Mixture-of-Experts | Unverified | 0 |