| MoEC: Mixture of Expert Clusters | Jul 19, 2022 | Machine Translation, Mixture-of-Experts |
| MoEC: Mixture of Experts Implicit Neural Compression | Dec 3, 2023 | Data Compression, Mixture-of-Experts |
| MoE-DiffIR: Task-customized Diffusion Priors for Universal Compressed Image Restoration | Jul 15, 2024 | Image Restoration, Mixture-of-Experts |
| MoEfication: Conditional Computation of Transformer Models for Efficient Inference | Nov 16, 2021 | Mixture-of-Experts |
| MoE-GPS: Guidelines for Prediction Strategy for Dynamic Expert Duplication in MoE Load Balancing | Jun 9, 2025 | GPU, Mixture-of-Experts |
| MoE-Gyro: Self-Supervised Over-Range Reconstruction and Denoising for MEMS Gyroscopes | May 27, 2025 | Benchmarking, Denoising |
| MoE-Lens: Towards the Hardware Limit of High-Throughput MoE LLM Serving Under Resource Constraints | Apr 12, 2025 | CPU, GPU |
| MoE-Lightning: High-Throughput MoE Inference on Memory-constrained GPUs | Nov 18, 2024 | Computational Efficiency, CPU |
| MoE-Loco: Mixture of Experts for Multitask Locomotion | Mar 11, 2025 | Mixture-of-Experts |
| MoELoRA: Contrastive Learning Guided Mixture of Experts on Parameter-Efficient Fine-Tuning for Large Language Models | Feb 20, 2024 | Common Sense Reasoning, Contrastive Learning |