| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| An Efficient General-Purpose Modular Vision Model via Multi-Task Heterogeneous Training | Jun 29, 2023 | Continual Learning, Mixture-of-Experts | Unverified |
| SkillNet-X: A Multilingual Multitask Model with Sparsely Activated Skills | Jun 28, 2023 | Mixture-of-Experts, Natural Language Understanding | Unverified |
| JiuZhang 2.0: A Unified Chinese Pre-trained Language Model for Multi-task Mathematical Problem Solving | Jun 19, 2023 | In-Context Learning, Language Modeling | Unverified |
| Learning to Specialize: Joint Gating-Expert Training for Adaptive MoEs in Decentralized Settings | Jun 14, 2023 | Diversity, Federated Learning | Unverified |
| Attention Weighted Mixture of Experts with Contrastive Learning for Personalized Ranking in E-commerce | Jun 8, 2023 | Contrastive Learning, Mixture-of-Experts | Unverified |
| Mixture-of-Supernets: Improving Weight-Sharing Supernet Training with Architecture-Routed Mixture-of-Experts | Jun 8, 2023 | Language Modeling | Code Available |
| Divide, Conquer, and Combine: Mixture of Semantic-Independent Experts for Zero-Shot Dialogue State Tracking | Jun 1, 2023 | Dialogue State Tracking, Mixture-of-Experts | Unverified |
| Revisiting Hate Speech Benchmarks: From Data Curation to System Deployment | Jun 1, 2023 | Benchmarking, Hate Speech Detection | Code Available |
| RAPHAEL: Text-to-Image Generation via Large Mixture of Diffusion Paths | May 29, 2023 | Image Generation, Mixture-of-Experts | Code Available |
| Modeling Task Relationships in Multi-variate Soft Sensor with Balanced Mixture-of-Experts | May 25, 2023 | Mixture-of-Experts | Unverified |
| Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models | May 24, 2023 | Mixture-of-Experts, Zero-shot Generalization | Unverified |
| Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding | May 23, 2023 | Citation Prediction, Contrastive Learning | Unverified |
| Towards A Unified View of Sparse Feed-Forward Network in Pretraining Large Language Model | May 23, 2023 | Language Modeling | Unverified |
| Condensing Multilingual Knowledge with Lightweight Language-Specific Modules | May 23, 2023 | Machine Translation, Mixture-of-Experts | Code Available |
| To Repeat or Not To Repeat: Insights from Scaling LLM under Token-Crisis | May 22, 2023 | Mixture-of-Experts | Unverified |
| Lifelong Language Pretraining with Distribution-Specialized Experts | May 20, 2023 | Lifelong Learning, Mixture-of-Experts | Unverified |
| Towards Convergence Rates for Parameter Estimation in Gaussian-gated Mixture of Experts | May 12, 2023 | Ensemble Learning, Mixture-of-Experts | Unverified |
| Locking and Quacking: Stacking Bayesian model predictions by log-pooling and superposition | May 12, 2023 | Bayesian Inference, Mixture-of-Experts | Unverified |
| Alternating Gradient Descent and Mixture-of-Experts for Integrated Multimodal Perception | May 10, 2023 | Classification, Image Classification | Unverified |
| Demystifying Softmax Gating Function in Gaussian Mixture of Experts | May 5, 2023 | Mixture-of-Experts, Parameter Estimation | Unverified |
| Steered Mixture-of-Experts Autoencoder Design for Real-Time Image Modelling and Denoising | May 5, 2023 | Decoder, Denoising | Unverified |
| Towards Being Parameter-Efficient: A Stratified Sparsely Activated Transformer with Dynamic Capacity | May 3, 2023 | Machine Translation, Mixture-of-Experts | Code Available |
| Pipeline MoE: A Flexible MoE Implementation with Pipeline Parallelism | Apr 22, 2023 | Mixture-of-Experts | Unverified |
| Revisiting Single-gated Mixtures of Experts | Apr 11, 2023 | Mixture-of-Experts | Unverified |
| FlexMoE: Scaling Large-scale Sparse Pre-trained Model Training via Dynamic Device Placement | Apr 8, 2023 | Mixture-of-Experts, Scheduling | Unverified |
| Mixed Regression via Approximate Message Passing | Apr 5, 2023 | Denoising, Mixture-of-Experts | Unverified |
| Steered Mixture of Experts Regression for Image Denoising with Multi-Model-Inference | Mar 30, 2023 | Denoising, Image Denoising | Unverified |
| Information Maximizing Curriculum: A Curriculum-Based Approach for Imitating Diverse Skills | Mar 27, 2023 | Imitation Learning, Mixture-of-Experts | Code Available |
| WM-MoE: Weather-aware Multi-scale Mixture-of-Experts for Blind Adverse Weather Removal | Mar 24, 2023 | Autonomous Driving, Contrastive Learning | Unverified |
| Disguise without Disruption: Utility-Preserving Face De-Identification | Mar 23, 2023 | De-identification, Ensemble Learning | Unverified |
| Improving Transformer Performance for French Clinical Notes Classification Using Mixture of Experts on a Limited Dataset | Mar 22, 2023 | Mixture-of-Experts, Text Classification | Unverified |
| HDformer: A Higher Dimensional Transformer for Diabetes Detection Utilizing Long Range Vascular Signals | Mar 17, 2023 | Computational Efficiency, Mixture-of-Experts | Unverified |
| MCR-DL: Mix-and-Match Communication Runtime for Deep Learning | Mar 15, 2023 | Deep Learning, GPU | Unverified |
| Scaling Vision-Language Models with Sparse Mixture of Experts | Mar 13, 2023 | Mixture-of-Experts | Unverified |
| A Hybrid Tensor-Expert-Data Parallelism Approach to Optimize Mixture-of-Experts Training | Mar 11, 2023 | Mixture-of-Experts | Unverified |
| Towards MoE Deployment: Mitigating Inefficiencies in Mixture-of-Expert (MoE) Inference | Mar 10, 2023 | CPU, Decoder | Unverified |
| Improving Expert Specialization in Mixture of Experts | Feb 28, 2023 | Continual Learning, Mixture-of-Experts | Unverified |
| Improved Training of Mixture-of-Experts Language GANs | Feb 23, 2023 | Adversarial Text, Image Generation | Unverified |
| TMoE-P: Towards the Pareto Optimum for Multivariate Soft Sensors | Feb 21, 2023 | Mixture-of-Experts | Unverified |
| Massively Multilingual Shallow Fusion with Large Language Models | Feb 17, 2023 | Automatic Speech Recognition (ASR) | Unverified |
| Fast, Differentiable and Sparse Top-k: a Convex Analysis Perspective | Feb 2, 2023 | GPU, Mixture-of-Experts | Unverified |
| Alternating Updates for Efficient Transformers | Jan 30, 2023 | Mixture-of-Experts | Unverified |
| PRUDEX-Compass: Towards Systematic Evaluation of Reinforcement Learning in Financial Markets | Jan 14, 2023 | Management, Mixture-of-Experts | Unverified |
| AdaEnsemble: Learning Adaptively Sparse Structured Ensemble Network for Click-Through Rate Prediction | Jan 6, 2023 | Click-Through Rate Prediction, Mixture-of-Experts | Unverified |
| Covariate-guided Bayesian mixture model for multivariate time series | Jan 3, 2023 | Mixture-of-Experts, Time Series | Code Available |
| Semantic-Aware Dynamic Parameter for Video Inpainting Transformer | Jan 1, 2023 | Mixture-of-Experts, Video Inpainting | Unverified |
| Mod-Squad: Designing Mixtures of Experts As Modular Multi-Task Learners | Jan 1, 2023 | Mixture-of-Experts, Multi-Task Learning | Unverified |
| AdaMV-MoE: Adaptive Multi-Task Vision Mixture-of-Experts | Jan 1, 2023 | Instance Segmentation, Mixture-of-Experts | Unverified |
| Memory-efficient NLLB-200: Language-specific Expert Pruning of a Massively Multilingual Machine Translation Model | Dec 19, 2022 | GPU, Machine Translation | Unverified |
| MultiCoder: Multi-Programming-Lingual Pre-Training for Low-Resource Code Completion | Dec 19, 2022 | Code Completion, Mixture-of-Experts | Unverified |