| Title | Date | Topics |
| --- | --- | --- |
| Mixture of neural operator experts for learning boundary conditions and model selection | Feb 6, 2025 | Mixture-of-Experts, Model Selection |
| Mixture of Parrots: Experts improve memorization more than reasoning | Oct 24, 2024 | Math, Memorization |
| Mixture of partially linear experts | May 5, 2024 | Mixture-of-Experts |
| Mixture of Quantized Experts (MoQE): Complementary Effect of Low-bit Quantization and Robustness | Oct 3, 2023 | GPU, Machine Translation |
| Mixture of Regression Experts in fMRI Encoding | Nov 26, 2018 | Mixture-of-Experts, Regression |
| Mixture of Routers | Mar 30, 2025 | Mixture-of-Experts, Parameter-Efficient Fine-Tuning |
| Mixture-of-Shape-Experts (MoSE): End-to-End Shape Dictionary Framework to Prompt SAM for Generalizable Medical Segmentation | Apr 13, 2025 | Dictionary Learning, Domain Generalization |
| Mixture of Tunable Experts - Behavior Modification of DeepSeek-R1 at Inference Time | Feb 16, 2025 | Mixture-of-Experts |
| Mixtures of Deep Neural Experts for Automated Speech Scoring | Jun 23, 2021 | Automatic Speech Recognition (ASR) |
| MJ-VIDEO: Fine-Grained Benchmarking and Rewarding Video Preferences in Video Generation | Feb 3, 2025 | Benchmarking, Fairness |
| MM1.5: Methods, Analysis & Insights from Multimodal LLM Fine-tuning | Sep 30, 2024 | Mixture-of-Experts, Optical Character Recognition (OCR) |
| MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training | Mar 14, 2024 | In-Context Learning, Mixture-of-Experts |
| MMoE: Robust Spoiler Detection with Multi-modal Information and Domain-aware Mixture-of-Experts | Mar 8, 2024 | Domain Generalization, Mixture-of-Experts |
| μ-MoE: Test-Time Pruning as Micro-Grained Mixture-of-Experts | May 24, 2025 | Mixture-of-Experts |
| MoA: Mixture-of-Attention for Subject-Context Disentanglement in Personalized Image Generation | Apr 17, 2024 | Disentanglement, Image Generation |
| MobileFlow: A Multimodal LLM For Mobile GUI Agent | Jul 5, 2024 | Action Analysis, Language Modelling |
| Mobile V-MoEs: Scaling Down Vision Transformers via Sparse Mixture-of-Experts | Sep 8, 2023 | Mixture-of-Experts |
| Mod-Adapter: Tuning-Free and Versatile Multi-concept Personalization via Modulation Adapter | May 24, 2025 | Image Generation, Mixture-of-Experts |
| MoDE: A Mixture-of-Experts Model with Mutual Distillation among the Experts | Jan 31, 2024 | Mixture-of-Experts |
| Model Agnostic Combination for Ensemble Learning | Jun 16, 2020 | Ensemble Learning, Mixture-of-Experts |
| Beyond the Typical: Modeling Rare Plausible Patterns in Chemical Reactions by Leveraging Sequential Mixture-of-Experts | Oct 7, 2023 | Mixture-of-Experts |
| Modeling Task Relationships in Multi-variate Soft Sensor with Balanced Mixture-of-Experts | May 25, 2023 | Mixture-of-Experts |
| Model Merging in Pre-training of Large Language Models | May 17, 2025 | Mixture-of-Experts |
| Model Selection for Gaussian-gated Gaussian Mixture of Experts Using Dendrograms of Mixing Measures | May 19, 2025 | Computational Efficiency, Ensemble Learning |
| Mod-Squad: Designing Mixture of Experts As Modular Multi-Task Learners | Dec 15, 2022 | Mixture-of-Experts, Multi-Task Learning |
| Mod-Squad: Designing Mixtures of Experts As Modular Multi-Task Learners | Jan 1, 2023 | Mixture-of-Experts, Multi-Task Learning |
| Modularity Matters: Learning Invariant Relational Reasoning Tasks | Jun 18, 2018 | Mixture-of-Experts, Relational Reasoning |
| MoE-AMC: Enhancing Automatic Modulation Classification Performance Using Mixture-of-Experts | Dec 4, 2023 | Classification, Mixture-of-Experts |
| MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation | Jan 16, 2022 | Knowledge Distillation, Mixture-of-Experts |
| MoE-CAP: Benchmarking Cost, Accuracy and Performance of Sparse Mixture-of-Experts Systems | Dec 10, 2024 | Benchmarking, Mixture-of-Experts |
| MoEC: Mixture of Expert Clusters | Jul 19, 2022 | Machine Translation, Mixture-of-Experts |
| MoEC: Mixture of Experts Implicit Neural Compression | Dec 3, 2023 | Data Compression, Mixture-of-Experts |
| MoE-DiffIR: Task-customized Diffusion Priors for Universal Compressed Image Restoration | Jul 15, 2024 | Image Restoration, Mixture-of-Experts |
| MoEfication: Conditional Computation of Transformer Models for Efficient Inference | Nov 16, 2021 | Mixture-of-Experts |
| MoE-GPS: Guidelines for Prediction Strategy for Dynamic Expert Duplication in MoE Load Balancing | Jun 9, 2025 | GPU, Mixture-of-Experts |
| MoE-Gyro: Self-Supervised Over-Range Reconstruction and Denoising for MEMS Gyroscopes | May 27, 2025 | Benchmarking, Denoising |
| MoE-Lens: Towards the Hardware Limit of High-Throughput MoE LLM Serving Under Resource Constraints | Apr 12, 2025 | CPU, GPU |
| MoE-Lightning: High-Throughput MoE Inference on Memory-constrained GPUs | Nov 18, 2024 | Computational Efficiency, CPU |
| MoE-Loco: Mixture of Experts for Multitask Locomotion | Mar 11, 2025 | Mixture-of-Experts |
| MoELoRA: Contrastive Learning Guided Mixture of Experts on Parameter-Efficient Fine-Tuning for Large Language Models | Feb 20, 2024 | Common Sense Reasoning, Contrastive Learning |
| MoEMba: A Mamba-based Mixture of Experts for High-Density EMG-based Hand Gesture Recognition | Feb 9, 2025 | Gesture Recognition, Hand Gesture Recognition |
| MoEMoE: Question Guided Dense and Scalable Sparse Mixture-of-Expert for Multi-source Multi-modal Answering | Mar 8, 2025 | Answer Generation, Mixture-of-Experts |
| MoENAS: Mixture-of-Expert based Neural Architecture Search for jointly Accurate, Fair, and Robust Edge Deep Neural Networks | Feb 11, 2025 | Fairness, Image Classification |
| MoE Parallel Folding: Heterogeneous Parallelism Mappings for Efficient Large-Scale MoE Model Training with Megatron Core | Apr 21, 2025 | Mixture-of-Experts |
| MoE-Pruner: Pruning Mixture-of-Experts Large Language Model using the Hints from Its Router | Oct 15, 2024 | Knowledge Distillation, Language Modeling |
| MoESD: Mixture of Experts Stable Diffusion to Mitigate Gender Bias | Jun 25, 2024 | Mixture-of-Experts |
| MoESD: Unveil Speculative Decoding's Potential for Accelerating Sparse MoE | May 26, 2025 | Mixture-of-Experts |
| MoE-SPNet: A Mixture-of-Experts Scene Parsing Network | Jun 19, 2018 | Mixture-of-Experts, Scene Parsing |
| MoET: Interpretable and Verifiable Reinforcement Learning via Mixture of Expert Trees | Sep 25, 2019 | Deep Reinforcement Learning, Game of Go |
| MoETuner: Optimized Mixture of Expert Serving with Balanced Expert Placement and Token Routing | Feb 10, 2025 | GPU, Mixture-of-Experts |