| Title | Date | Tags |
| --- | --- | --- |
| A Review of Sparse Expert Models in Deep Learning | Sep 4, 2022 | Deep Learning, Mixture-of-Experts |
| HMoE: Heterogeneous Mixture of Experts for Language Modeling | Aug 20, 2024 | Computational Efficiency, Language Modeling |
| FineQuant: Unlocking Efficiency with Fine-Grained Weight-Only Quantization for LLMs | Aug 16, 2023 | GPU, Mixture-of-Experts |
| Complexity Experts are Task-Discriminative Learners for Any Image Restoration | Nov 27, 2024 | Attribute, Blind All-in-One Image Restoration |
| Finding Fantastic Experts in MoEs: A Unified Study for Expert Dropping Strategies and Observations | Apr 8, 2025 | Instruction Following, Mixture-of-Experts |
| Filtered not Mixed: Stochastic Filtering-Based Online Gating for Mixture of Large Language Models | Jun 5, 2024 | Mixture-of-Experts, Time Series |
| A Review of DeepSeek Models' Key Innovative Techniques | Mar 14, 2025 | Mixture-of-Experts, Reinforcement Learning |
| AdaMV-MoE: Adaptive Multi-Task Vision Mixture-of-Experts | Jan 1, 2023 | Instance Segmentation, Mixture-of-Experts |
| HMOE: Hypernetwork-based Mixture of Experts for Domain Generalization | Nov 15, 2022 | Domain Generalization, Mixture-of-Experts |
| FlexMoE: Scaling Large-scale Sparse Pre-trained Model Training via Dynamic Device Placement | Apr 8, 2023 | Mixture-of-Experts, Scheduling |
| FloE: On-the-Fly MoE Inference on Memory-constrained GPU | May 9, 2025 | CPU, GPU |
| fMoE: Fine-Grained Expert Offloading for Large Mixture-of-Experts Serving | Feb 7, 2025 | CPU, GPU |
| FMT: A Multimodal Pneumonia Detection Model Based on Stacking MOE Framework | Mar 7, 2025 | Diagnostic, Medical Image Analysis |
| ForceVLA: Enhancing VLA Models with a Force-aware MoE for Contact-rich Manipulation | May 28, 2025 | Contact-rich Manipulation, Mixture-of-Experts |
| Holistic Capability Preservation: Towards Compact Yet Comprehensive Reasoning Models | Apr 9, 2025 | Instruction Following, Mathematical Problem-Solving |
| FreqMoE: Dynamic Frequency Enhancement for Neural PDE Solvers | May 11, 2025 | Computational Efficiency, Mixture-of-Experts |
| Affect in Tweets Using Experts Model | Mar 20, 2019 | Mixture-of-Experts, Model |
| Learning to Specialize: Joint Gating-Expert Training for Adaptive MoEs in Decentralized Settings | Jun 14, 2023 | Diversity, Federated Learning |
| Fresh-CL: Feature Realignment through Experts on Hypersphere in Continual Learning | Jan 4, 2025 | Continual Learning, Mixture-of-Experts |
| From Google Gemini to OpenAI Q* (Q-Star): A Survey of Reshaping the Generative Artificial Intelligence (AI) Research Landscape | Dec 18, 2023 | Mixture-of-Experts |
| ContextWIN: Whittle Index Based Mixture-of-Experts Neural Model For Restless Bandits Via Deep RL | Oct 13, 2024 | Decision Making, Mixture-of-Experts |
| FSMoE: A Flexible and Scalable Training System for Sparse Mixture-of-Experts Models | Jan 18, 2025 | GPU, Mixture-of-Experts |
| Continual Learning Using Task Conditional Neural Networks | Sep 29, 2021 | Continual Learning, Mixture-of-Experts |
| Full-Precision Free Binary Graph Neural Networks | Sep 29, 2021 | Graph Neural Network, Mixture-of-Experts |
| FedMoE: Personalized Federated Learning via Heterogeneous Mixture of Experts | Aug 21, 2024 | Federated Learning, Heuristic Search |