| Title | Date | Tags | Code | # |
| --- | --- | --- | --- | --- |
| FuxiMT: Sparsifying Large Language Models for Chinese-Centric Multilingual Machine Translation | May 20, 2025 | Language Modeling | Unverified | 0 |
| StPR: Spatiotemporal Preservation and Routing for Exemplar-Free Video Class-Incremental Learning | May 20, 2025 | Class-Incremental Learning | Unverified | 0 |
| Two Experts Are All You Need for Steering Thinking: Reinforcing Cognitive Effort in MoE Reasoning Models Without Additional Training | May 20, 2025 | Domain Generalization | Unverified | 0 |
| U-SAM: An Audio Language Model for Unified Speech, Audio, and Music Understanding | May 20, 2025 | Cross-Modal Alignment, Language Modeling | Code Available | 1 |
| Scaling and Enhancing LLM-based AVSR: A Sparse Mixture of Projectors Approach | May 20, 2025 | Audio-Visual Speech Recognition, Mixture-of-Experts | Unverified | 0 |
| Occult: Optimizing Collaborative Communication across Experts for Accelerated Parallel MoE Training and Inference | May 19, 2025 | Computational Efficiency, Mixture-of-Experts | Code Available | 1 |
| Model Selection for Gaussian-gated Gaussian Mixture of Experts Using Dendrograms of Mixing Measures | May 19, 2025 | Computational Efficiency, Ensemble Learning | Unverified | 0 |
| True Zero-Shot Inference of Dynamical Systems Preserving Long-Term Statistics | May 19, 2025 | Mixture-of-Experts, Time Series | Unverified | 0 |
| CompeteSMoE -- Statistically Guaranteed Mixture of Experts Training via Competition | May 19, 2025 | Mixture-of-Experts | Code Available | 0 |
| Seeing the Unseen: How EMoE Unveils Bias in Text-to-Image Diffusion Models | May 19, 2025 | Fairness, Mixture-of-Experts | Unverified | 0 |