| PWC-MoE: Privacy-Aware Wireless Collaborative Mixture of Experts | May 13, 2025 | Computational Efficiency, Mixture-of-Experts | Unverified | 0 |
| AM-Thinking-v1: Advancing the Frontier of Reasoning at 32B Scale | May 13, 2025 | Mixture-of-Experts | Unverified | 0 |
| UMoE: Unifying Attention and FFN with Shared Experts | May 12, 2025 | Mixture-of-Experts | Unverified | 0 |
| The power of fine-grained experts: Granularity boosts expressivity in Mixture of Experts | May 11, 2025 | Mixture-of-Experts | Unverified | 0 |
| FreqMoE: Dynamic Frequency Enhancement for Neural PDE Solvers | May 11, 2025 | Computational Efficiency, Mixture-of-Experts | Unverified | 0 |
| Seed1.5-VL Technical Report | May 11, 2025 | Mixture-of-Experts, Multimodal Reasoning | Unverified | 0 |
| QoS-Efficient Serving of Multiple Mixture-of-Expert LLMs Using Partial Runtime Reconfiguration | May 10, 2025 | GPU, Mixture-of-Experts | Unverified | 0 |
| Emotion-Qwen: Training Hybrid Experts for Unified Emotion and General Vision-Language Understanding | May 10, 2025 | Descriptive, Emotion Recognition | Code Available | 1 |
| Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free | May 10, 2025 | Attribute, Mixture-of-Experts | Code Available | 4 |
| MxMoE: Mixed-precision Quantization for MoE with Accuracy and Performance Co-Design | May 9, 2025 | Mixture-of-Experts, Quantization | Code Available | 1 |