| Title | Date | Topics | Code | Count |
| --- | --- | --- | --- | --- |
| Scaling Laws for Native Multimodal Models | Apr 10, 2025 | Mixture-of-Experts | Unverified | 0 |
| C3PO: Critical-Layer, Core-Expert, Collaborative Pathway Optimization for Test-Time Expert Re-Mixing | Apr 10, 2025 | In-Context Learning, Mixture-of-Experts | Code Available | 1 |
| Cluster-Driven Expert Pruning for Mixture-of-Experts Large Language Models | Apr 10, 2025 | Computational Efficiency, Mixture-of-Experts | Code Available | 0 |
| Adaptive Detection of Fast Moving Celestial Objects Using a Mixture of Experts and Physical-Inspired Neural Network | Apr 10, 2025 | Mixture-of-Experts, Object Detection | Unverified | 0 |
| Holistic Capability Preservation: Towards Compact Yet Comprehensive Reasoning Models | Apr 9, 2025 | Instruction Following, Mathematical Problem-Solving | Unverified | 0 |
| FedMerge: Federated Personalization via Model Merging | Apr 9, 2025 | Federated Learning, Mixture-of-Experts | Unverified | 0 |
| MoEDiff-SR: Mixture of Experts-Guided Diffusion Model for Region-Adaptive MRI Super-Resolution | Apr 9, 2025 | Computational Efficiency, Denoising | Code Available | 1 |
| Finding Fantastic Experts in MoEs: A Unified Study for Expert Dropping Strategies and Observations | Apr 8, 2025 | Instruction Following, Mixture-of-Experts | Unverified | 0 |
| HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference | Apr 8, 2025 | CPU, GPU | Code Available | 2 |
| HeterMoE: Efficient Training of Mixture-of-Experts Models on Heterogeneous GPUs | Apr 4, 2025 | GPU, Mixture-of-Experts | Unverified | 0 |