| Few-Shot and Continual Learning with Attentive Independent Mechanisms | Jul 29, 2021 | Continual Learning, Few-Shot Learning | Code Available | 1 | 5 |
| Awaker2.5-VL: Stably Scaling MLLMs with Parameter-Efficient Mixture of Experts | Nov 16, 2024 | Mixture-of-Experts, Optical Character Recognition (OCR) | Code Available | 1 | 5 |
| DirectMultiStep: Direct Route Generation for Multi-Step Retrosynthesis | May 22, 2024 | Diversity, Mixture-of-Experts | Code Available | 1 | 5 |
| Specialized federated learning using a mixture of experts | Oct 5, 2020 | Federated Learning, Mixture-of-Experts | Code Available | 1 | 5 |
| Occult: Optimizing Collaborative Communication across Experts for Accelerated Parallel MoE Training and Inference | May 19, 2025 | Computational Efficiency, Mixture-of-Experts | Code Available | 1 | 5 |
| Parameter-Efficient Mixture-of-Experts Architecture for Pre-trained Language Models | Mar 2, 2022 | Language Modelling | Code Available | 1 | 5 |
| AutoMoE: Heterogeneous Mixture-of-Experts with Adaptive Computation for Efficient Neural Machine Translation | Oct 14, 2022 | CPU, Machine Translation | Code Available | 1 | 5 |
| Efficient and Degradation-Adaptive Network for Real-World Image Super-Resolution | Mar 27, 2022 | Image Super-Resolution, Mixture-of-Experts | Code Available | 1 | 5 |
| Norface: Improving Facial Expression Analysis by Identity Normalization | Jul 22, 2024 | Classification, Emotion Recognition | Code Available | 1 | 5 |
| Sequence-level Semantic Representation Fusion for Recommender Systems | Feb 28, 2024 | Mixture-of-Experts, Recommendation Systems | Code Available | 1 | 5 |
| Exploring Sparse MoE in GANs for Text-conditioned Image Synthesis | Sep 7, 2023 | Image Generation, Mixture-of-Experts | Code Available | 1 | 5 |
| Navigating Spatio-Temporal Heterogeneity: A Graph Transformer Approach for Traffic Forecasting | Aug 20, 2024 | Attribute, Mixture-of-Experts | Code Available | 1 | 5 |
| Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs | Jul 1, 2024 | GPU, Mixture-of-Experts | Code Available | 1 | 5 |
| Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters | Feb 1, 2024 | Mixture-of-Experts, Parameter-Efficient Fine-Tuning | Code Available | 1 | 5 |
| Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark | Jun 12, 2024 | Benchmarking, Mixture-of-Experts | Code Available | 1 | 5 |
| EWMoE: An effective model for global weather forecasting with mixture-of-experts | May 9, 2024 | Mixture-of-Experts, Weather Forecasting | Code Available | 1 | 5 |
| FineMoGen: Fine-Grained Spatio-Temporal Motion Generation and Editing | Dec 22, 2023 | Mixture-of-Experts, Motion Generation | Code Available | 1 | 5 |
| EvoMoE: An Evolutional Mixture-of-Experts Training Framework via Dense-To-Sparse Gate | Dec 29, 2021 | Language Modelling | Code Available | 1 | 5 |
| Dense Backpropagation Improves Training for Sparse Mixture-of-Experts | Apr 16, 2025 | Mixture-of-Experts | Code Available | 1 | 5 |
| Exploiting Inter-Layer Expert Affinity for Accelerating Mixture-of-Experts Model Inference | Jan 16, 2024 | GPU, Mixture-of-Experts | Code Available | 1 | 5 |
| MxMoE: Mixed-precision Quantization for MoE with Accuracy and Performance Co-Design | May 9, 2025 | Mixture-of-Experts, Quantization | Code Available | 1 | 5 |
| MLP Fusion: Towards Efficient Fine-tuning of Dense and Mixture-of-Experts Language Models | Jul 18, 2023 | Language Modelling, Mixture-of-Experts | Code Available | 1 | 5 |
| Re-IQA: Unsupervised Learning for Image Quality Assessment in the Wild | Apr 2, 2023 | Image Quality Assessment, Mixture-of-Experts | Code Available | 1 | 5 |
| Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers | Mar 2, 2023 | Mixture-of-Experts | Code Available | 1 | 5 |
| Sparse Universal Transformer | Oct 11, 2023 | Mixture-of-Experts | Code Available | 1 | 5 |
| Multi-Source Domain Adaptation with Mixture of Experts | Sep 7, 2018 | Domain Adaptation, Mixture-of-Experts | Code Available | 0 | 5 |
| Multimodal Fusion Strategies for Mapping Biophysical Landscape Features | Oct 7, 2024 | Mixture-of-Experts | Code Available | 0 | 5 |
| Multimodal Cultural Safety: Evaluation Frameworks and Alignment Strategies | May 20, 2025 | Mixture-of-Experts | Code Available | 0 | 5 |
| DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale | Jan 14, 2022 | Decoder, Mixture-of-Experts | Code Available | 0 | 5 |
| Multi-modal Collaborative Optimization and Expansion Network for Event-assisted Single-eye Expression Recognition | May 17, 2025 | Deep Attention, Mamba | Code Available | 0 | 5 |
| Multi-view Contrastive Learning for Entity Typing over Knowledge Graphs | Oct 18, 2023 | Contrastive Learning, Entity Typing | Code Available | 0 | 5 |
| MoVEInt: Mixture of Variational Experts for Learning Human-Robot Interactions from Demonstrations | Jul 10, 2024 | Mixture-of-Experts | Code Available | 0 | 5 |
| Mosaic: Data-Free Knowledge Distillation via Mixture-of-Experts for Heterogeneous Distributed Environments | May 26, 2025 | Data-free Knowledge Distillation, Federated Learning | Code Available | 0 | 5 |
| A Two-Phase Deep Learning Framework for Adaptive Time-Stepping in High-Speed Flow Modeling | Jun 9, 2025 | Mixture-of-Experts | Code Available | 0 | 5 |
| More Experts Than Galaxies: Conditionally-overlapping Experts With Biologically-Inspired Fixed Routing | Oct 10, 2024 | Image Classification | Code Available | 0 | 5 |
| MoLEx: Mixture of Layer Experts for Finetuning with Sparse Upcycling | Mar 14, 2025 | Mixture-of-Experts, Parameter-Efficient Fine-Tuning | Code Available | 0 | 5 |
| Adaptive 3D descattering with a dynamic synthesis network | Jul 1, 2021 | Denoising, Mixture-of-Experts | Code Available | 0 | 5 |
| Mol-MoE: Training Preference-Guided Routers for Molecule Generation | Feb 8, 2025 | Benchmarking, Drug Design | Code Available | 0 | 5 |
| MoNTA: Accelerating Mixture-of-Experts Training with Network-Traffc-Aware Parallel Optimization | Nov 1, 2024 | 8k, Mixture-of-Experts | Code Available | 0 | 5 |
| DAOP: Data-Aware Offloading and Predictive Pre-Calculation for Efficient MoE Inference | Dec 16, 2024 | CPU, GPU | Code Available | 0 | 5 |
| DA-MoE: Addressing Depth-Sensitivity in Graph-Level Analysis through Mixture of Experts | Nov 5, 2024 | Mixture-of-Experts, Sensitivity | Code Available | 0 | 5 |
| MOoSE: Multi-Orientation Sharing Experts for Open-set Scene Text Recognition | Jul 26, 2024 | Mixture-of-Experts, Scene Text Recognition | Code Available | 0 | 5 |
| A Bird's-eye View of Reranking: from List Level to Page Level | Nov 17, 2022 | Mixture-of-Experts, Recommendation Systems | Code Available | 0 | 5 |
| A Teacher Is Worth A Million Instructions | Jun 27, 2024 | Mixture-of-Experts | Code Available | 0 | 5 |
| MoE-MLoRA for Multi-Domain CTR Prediction: Efficient Adaptation with Expert Specialization | Jun 9, 2025 | Click-Through Rate Prediction, Diversity | Code Available | 0 | 5 |
| A Survey on Prompt Tuning | Jul 8, 2025 | Computational Efficiency, Mixture-of-Experts | Code Available | 0 | 5 |
| Covariate-guided Bayesian mixture model for multivariate time series | Jan 3, 2023 | Mixture-of-Experts, Time Series | Code Available | 0 | 5 |
| MoRE-Brain: Routed Mixture of Experts for Interpretable and Generalizable Cross-Subject fMRI Visual Decoding | May 21, 2025 | Mixture-of-Experts | Code Available | 0 | 5 |
| Countering Mainstream Bias via End-to-End Adaptive Local Learning | Apr 13, 2024 | Collaborative Filtering, Mixture-of-Experts | Code Available | 0 | 5 |
| Co-Supervised Learning: Improving Weak-to-Strong Generalization with Hierarchical Mixture of Experts | Feb 23, 2024 | Mixture-of-Experts | Code Available | 0 | 5 |
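Nearly every entry above builds on the sparse Mixture-of-Experts layer: a learned router scores a set of expert sub-networks per token, only the top-k experts run, and their outputs are combined with the renormalised router weights. As background, a minimal sketch of that forward pass (all names and sizes are illustrative, not taken from any listed paper):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class SparseMoE:
    """Minimal top-k Mixture-of-Experts layer.

    A linear router scores n_experts per token; the top-k experts
    (each a small 2-layer ReLU MLP here) process the token, and their
    outputs are summed with the renormalised router probabilities.
    """

    def __init__(self, d_model, d_hidden, n_experts, k, seed=0):
        rng = np.random.default_rng(seed)
        self.k = k
        self.router = rng.standard_normal((d_model, n_experts)) * 0.02
        self.w1 = rng.standard_normal((n_experts, d_model, d_hidden)) * 0.02
        self.w2 = rng.standard_normal((n_experts, d_hidden, d_model)) * 0.02

    def __call__(self, x):                              # x: (tokens, d_model)
        logits = x @ self.router                        # (tokens, n_experts)
        topk = np.argsort(logits, axis=-1)[:, -self.k:]  # chosen expert ids
        gates = softmax(np.take_along_axis(logits, topk, axis=-1))
        y = np.zeros_like(x)
        for t in range(x.shape[0]):                     # explicit loop for clarity
            for e, g in zip(topk[t], gates[t]):
                h = np.maximum(x[t] @ self.w1[e], 0.0)  # expert MLP, ReLU
                y[t] += g * (h @ self.w2[e])
        return y

moe = SparseMoE(d_model=8, d_hidden=16, n_experts=4, k=2)
out = moe(np.ones((3, 8)))
print(out.shape)  # (3, 8): each token was routed to 2 of the 4 experts
```

Real implementations replace the per-token loop with batched expert dispatch and add a load-balancing loss on the router; the papers above largely differ in how they make that routing, dispatch, and training efficient.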