| pFedMoE: Data-Level Personalization with Mixture of Experts for Model-Heterogeneous Personalized Federated Learning | Feb 2, 2024 | Federated Learning, Mixture-of-Experts | Code Available | 0 |
| BlackMamba: Mixture of Experts for State-Space Models | Feb 1, 2024 | Language Modeling | Code Available | 3 |
| Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters | Feb 1, 2024 | Mixture-of-Experts, Parameter-Efficient Fine-Tuning | Code Available | 1 |
| Merging Multi-Task Models via Weight-Ensembling Mixture of Experts | Feb 1, 2024 | Mixture-of-Experts, Task Arithmetic | Code Available | 1 |
| MoDE: A Mixture-of-Experts Model with Mutual Distillation among the Experts | Jan 31, 2024 | Mixture-of-Experts | Unverified | 0 |
| Explainable data-driven modeling via mixture of experts: towards effective blending of grey and black-box models | Jan 30, 2024 | Mixture-of-Experts | Unverified | 0 |
| Checkmating One, by Using Many: Combining Mixture of Experts with MCTS to Improve in Chess | Jan 30, 2024 | Mixture-of-Experts | Code Available | 0 |
| OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models | Jan 29, 2024 | Decoder, Mixture-of-Experts | Code Available | 5 |
| Routers in Vision Mixture of Experts: An Empirical Study | Jan 29, 2024 | Language Modeling | Unverified | 0 |
| LLaVA-MoLE: Sparse Mixture of LoRA Experts for Mitigating Data Conflicts in Instruction Finetuning MLLMs | Jan 29, 2024 | Language Modelling, Large Language Model | Unverified | 0 |
| MoE-LLaVA: Mixture of Experts for Large Vision-Language Models | Jan 29, 2024 | Hallucination, Mixture-of-Experts | Code Available | 7 |
| Contrastive Learning and Mixture of Experts Enables Precise Vector Embeddings | Jan 28, 2024 | Contrastive Learning, Descriptive | Code Available | 1 |
| Is Temperature Sample Efficient for Softmax Gaussian Mixture of Experts? | Jan 25, 2024 | Mixture-of-Experts, Parameter Estimation | Unverified | 0 |
| M^3TN: Multi-gate Mixture-of-Experts based Multi-valued Treatment Network for Uplift Modeling | Jan 24, 2024 | Mixture-of-Experts | Unverified | 0 |
| Exploiting Inter-Layer Expert Affinity for Accelerating Mixture-of-Experts Model Inference | Jan 16, 2024 | GPU, Mixture-of-Experts | Code Available | 1 |
| Towards A Better Metric for Text-to-Video Generation | Jan 15, 2024 | Mixture-of-Experts, Text-to-Video Generation | Unverified | 0 |
| Prompt-based mental health screening from social media text | Jan 11, 2024 | Mixture-of-Experts | Unverified | 0 |
| DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models | Jan 11, 2024 | Language Modelling, Large Language Model | Code Available | 5 |
| Robust Calibration For Improved Weather Prediction Under Distributional Shift | Jan 8, 2024 | Data Augmentation, Mixture-of-Experts | Unverified | 0 |
| MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts | Jan 8, 2024 | Mamba, Mixture-of-Experts | Code Available | 3 |
| Mixtral of Experts | Jan 8, 2024 | Code Generation, Common Sense Reasoning | Code Available | 4 |
| Incorporating Visual Experts to Resolve the Information Loss in Multimodal Large Language Models | Jan 6, 2024 | Instruction Following, Mixture-of-Experts | Unverified | 0 |
| Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks | Jan 5, 2024 | Arithmetic Reasoning, Code Generation | Code Available | 2 |
| Subjective and Objective Analysis of Indian Social Media Video Quality | Jan 5, 2024 | Mixture-of-Experts, Visual Question Answering (VQA) | Code Available | 0 |
| Frequency-Adaptive Pan-Sharpening with Mixture of Experts | Jan 4, 2024 | Mixture-of-Experts | Code Available | 1 |