| Title | Date | Tasks | Code |
| --- | --- | --- | --- |
| MicarVLMoE: A Modern Gated Cross-Aligned Vision-Language Mixture of Experts Model for Medical Image Captioning and Report Generation | Apr 29, 2025 | Cross-Modal Alignment, Decoder | Code Available |
| Build a Robust QA System with Transformer-based Mixture of Experts | Mar 20, 2022 | Data Augmentation, Mixture-of-Experts | Code Available |
| Embarrassingly Parallel Inference for Gaussian Processes | Feb 27, 2017 | Gaussian Processes, Mixture-of-Experts | Code Available |
| Elucidating Robust Learning with Uncertainty-Aware Corruption Pattern Estimation | Nov 2, 2021 | Mixture-of-Experts | Code Available |
| Eliciting and Understanding Cross-Task Skills with Task-Level Mixture-of-Experts | May 25, 2022 | Mixture-of-Experts, Multi-Task Learning | Code Available |
| Eidetic Learning: an Efficient and Provable Solution to Catastrophic Forgetting | Feb 13, 2025 | Mixture-of-Experts | Code Available |
| Manifold-Preserving Transformers are Effective for Short-Long Range Encoding | Oct 22, 2023 | Language Modeling | Code Available |
| MaskMoE: Boosting Token-Level Learning via Routing Mask in Mixture-of-Experts | Jul 13, 2024 | Diversity, Mixture-of-Experts | Code Available |
| Mixture-of-Supernets: Improving Weight-Sharing Supernet Training with Architecture-Routed Mixture-of-Experts | Jun 8, 2023 | Language Modeling | Code Available |
| LLM-e Guess: Can LLMs Capabilities Advance Without Hardware Progress? | May 7, 2025 | Large Language Model, Mixture-of-Experts | Code Available |
| Robust Federated Learning by Mixture of Experts | Apr 23, 2021 | Federated Learning, Mixture-of-Experts | Code Available |
| m2mKD: Module-to-Module Knowledge Distillation for Modular Transformers | Feb 26, 2024 | Knowledge Distillation, Mixture-of-Experts | Code Available |
| RouterKT: Mixture-of-Experts for Knowledge Tracing | Apr 11, 2025 | Knowledge Tracing, Mixture-of-Experts | Code Available |
| Efficient and Interpretable Grammatical Error Correction with Mixture of Experts | Oct 30, 2024 | Grammatical Error Correction, Mixture-of-Experts | Code Available |
| Effective Approaches to Batch Parallelization for Dynamic Neural Network Architectures | Jul 8, 2017 | Mixture-of-Experts, Question Answering | Code Available |
| Lifelong Mixture of Variational Autoencoders | Jul 9, 2021 | Lifelong Learning, Mixture-of-Experts | Code Available |
| Learning Mixture-of-Experts for General-Purpose Black-Box Discrete Optimization | May 29, 2024 | Mixture-of-Experts | Code Available |
| Learning multi-modal generative models with permutation-invariant encoders and tighter variational objectives | Sep 1, 2023 | Mixture-of-Experts | Code Available |
| EAQuant: Enhancing Post-Training Quantization for MoE Models via Expert-Aware Optimization | Jun 16, 2025 | Mixture-of-Experts, Model Compression | Code Available |
| Countering Mainstream Bias via End-to-End Adaptive Local Learning | Apr 13, 2024 | Collaborative Filtering, Mixture-of-Experts | Code Available |
| SEKE: Specialised Experts for Keyword Extraction | Dec 18, 2024 | Descriptive, Keyword Extraction | Code Available |
| A multi-scale lithium-ion battery capacity prediction using mixture of experts and patch-based MLP | Mar 26, 2025 | Mixture-of-Experts | Code Available |
| DynMoLE: Boosting Mixture of LoRA Experts Fine-Tuning with a Hybrid Routing Mechanism | Apr 1, 2025 | Common Sense Reasoning, Computational Efficiency | Code Available |
| Binary-Integer-Programming Based Algorithm for Expert Load Balancing in Mixture-of-Experts Models | Feb 21, 2025 | Mixture-of-Experts | Code Available |
| A Multi-Modal Deep Learning Framework for Pan-Cancer Prognosis | Jan 13, 2025 | Deep Learning, Mixture-of-Experts | Code Available |