| Title | Date | Tags | Code | # |
| --- | --- | --- | --- | --- |
| Mixture of Experts Meets Prompt-Based Continual Learning | May 23, 2024 | Continual Learning, Mixture-of-Experts | Code Available | 1 |
| Graph Sparsification via Mixture of Graphs | May 23, 2024 | Graph Learning, Mixture-of-Experts | Code Available | 1 |
| Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models | May 23, 2024 | Mixture-of-Experts, Visual Question Answering | Code Available | 2 |
| Unchosen Experts Can Contribute Too: Unleashing MoE Models' Power by Self-Contrast | May 23, 2024 | Computational Efficiency, GSM8K | Code Available | 1 |
| Sigmoid Gating is More Sample Efficient than Softmax Gating in Mixture of Experts | May 22, 2024 | Mixture-of-Experts | Unverified | 0 |
| xRAG: Extreme Context Compression for Retrieval-augmented Generation with One Token | May 22, 2024 | Language Modeling | Code Available | 2 |
| DirectMultiStep: Direct Route Generation for Multi-Step Retrosynthesis | May 22, 2024 | Diversity, Mixture-of-Experts | Code Available | 1 |
| Ensemble and Mixture-of-Experts DeepONets For Operator Learning | May 20, 2024 | Mixture-of-Experts, Operator Learning | Code Available | 0 |
| MeteoRA: Multiple-tasks Embedded LoRA for Large Language Models | May 19, 2024 | Mixture-of-Experts, Parameter-Efficient Fine-Tuning | Code Available | 1 |
| Learning More Generalized Experts by Merging Experts in Mixture-of-Experts | May 19, 2024 | Incremental Learning, Mixture-of-Experts | Unverified | 0 |