| Title | Date | Topics |
| --- | --- | --- |
| La-SoftMoE CLIP for Unified Physical-Digital Face Attack Detection | Aug 23, 2024 | Mixture-of-Experts |
| How Lightweight Can A Vision Transformer Be | Jul 25, 2024 | Mixture-of-Experts, Transfer Learning |
| Learning Heterogeneous Tissues with Mixture of Experts for Gigapixel Whole Slide Images | Jan 1, 2025 | Mixture-of-Experts, Whole Slide Images |
| Lifelong Knowledge Editing for Vision Language Models with Low-Rank Mixture-of-Experts | Nov 23, 2024 | Knowledge Editing, Mixture-of-Experts |
| Hunyuan-TurboS: Advancing Large Language Models through Mamba-Transformer Synergy and Adaptive Chain-of-Thought | May 21, 2025 | Chatbot, Instruction Following |
| Faster MoE LLM Inference for Extremely Large Models | May 6, 2025 | Inference Optimization, Mixture-of-Experts |
| Faster Language Models with Better Multi-Token Prediction Using Tensor Decomposition | Oct 23, 2024 | Code Generation, Mixture-of-Experts |
| CoCoAFusE: Beyond Mixtures of Experts via Model Fusion | May 2, 2025 | Mixture-of-Experts, Philosophy |
| Fast, Differentiable and Sparse Top-k: a Convex Analysis Perspective | Feb 2, 2023 | GPU, Mixture-of-Experts |
| An Unsupervised Domain Adaptation Method for Locating Manipulated Region in partially fake Audio | Jul 11, 2024 | Data Augmentation, Diversity |