| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models | May 24, 2023 | Mixture-of-Experts, Zero-shot Generalization | Unverified |
| Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding | May 23, 2023 | Citation Prediction, Contrastive Learning | Unverified |
| Towards A Unified View of Sparse Feed-Forward Network in Pretraining Large Language Model | May 23, 2023 | Avg, Language Modeling | Unverified |
| Condensing Multilingual Knowledge with Lightweight Language-Specific Modules | May 23, 2023 | Machine Translation, Mixture-of-Experts | Code Available |
| To Repeat or Not To Repeat: Insights from Scaling LLM under Token-Crisis | May 22, 2023 | Mixture-of-Experts | Unverified |
| Lifelong Language Pretraining with Distribution-Specialized Experts | May 20, 2023 | Lifelong Learning, Mixture-of-Experts | Unverified |
| Towards Convergence Rates for Parameter Estimation in Gaussian-gated Mixture of Experts | May 12, 2023 | Ensemble Learning, Mixture-of-Experts | Unverified |
| Locking and Quacking: Stacking Bayesian Model Predictions by Log-pooling and Superposition | May 12, 2023 | Bayesian Inference, Mixture-of-Experts | Unverified |
| Alternating Gradient Descent and Mixture-of-Experts for Integrated Multimodal Perception | May 10, 2023 | Classification, Image Classification | Unverified |
| Demystifying Softmax Gating Function in Gaussian Mixture of Experts | May 5, 2023 | Mixture-of-Experts, Parameter Estimation | Unverified |