| Title | Date | Tags | Code Status | Upvotes |
|---|---|---|---|---|
| MoRE: Unlocking Scalability in Reinforcement Learning for Quadruped Vision-Language-Action Models | Mar 11, 2025 | Large Language Model, Mixture-of-Experts | Unverified | 0 |
| MoE-Loco: Mixture of Experts for Multitask Locomotion | Mar 11, 2025 | Mixture-of-Experts | Unverified | 0 |
| UniF^2ace: Fine-grained Face Understanding and Generation with Unified Multimodal Models | Mar 11, 2025 | Attribute, Mixture-of-Experts | Unverified | 0 |
| Accelerating MoE Model Inference with Expert Sharding | Mar 11, 2025 | Decoder, GPU | Unverified | 0 |
| GM-MoE: Low-Light Enhancement with Gated-Mechanism Mixture-of-Experts | Mar 10, 2025 | 3D Reconstruction, Autonomous Driving | Unverified | 0 |
| A Comprehensive Survey of Mixture-of-Experts: Algorithms, Theory, and Applications | Mar 10, 2025 | Continual Learning, Meta-Learning | Code Available | 9 |
| ResMoE: Space-efficient Compression of Mixture of Experts LLMs via Residual Restoration | Mar 10, 2025 | Mixture-of-Experts | Code Available | 0 |
| eMoE: Task-aware Memory Efficient Mixture-of-Experts-Based (MoE) Model Inference | Mar 10, 2025 | Mixture-of-Experts, Scheduling | Unverified | 0 |
| Swift Hydra: Self-Reinforcing Generative Framework for Anomaly Detection with Multiple Mamba Models | Mar 9, 2025 | Anomaly Detection, Mamba | Code Available | 0 |
| MoFE: Mixture of Frozen Experts Architecture | Mar 9, 2025 | Mixture-of-Experts, Parameter-Efficient Fine-Tuning | Unverified | 0 |