| Divide, Conquer, and Combine: Mixture of Semantic-Independent Experts for Zero-Shot Dialogue State Tracking | Jun 1, 2023 | Dialogue State Tracking, Mixture-of-Experts |
| Double Deep Q-Learning in Opponent Modeling | Nov 24, 2022 | Mixture-of-Experts, Q-Learning |
| Double-Stage Feature-Level Clustering-Based Mixture of Experts Framework | Mar 12, 2025 | Clustering, Diversity |
| Double-Wing Mixture of Experts for Streaming Recommendations | Sep 14, 2020 | Ensemble Learning, Mixture-of-Experts |
| DriveMoE: Mixture-of-Experts for Vision-Language-Action Model in End-to-End Autonomous Driving | May 22, 2025 | Autonomous Driving, Bench2Drive |
| Dropout Regularization in Hierarchical Mixture of Experts | Dec 25, 2018 | Mixture-of-Experts |
| Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization | Feb 26, 2025 | Mixture-of-Experts |
| DSMoE: Matrix-Partitioned Experts with Dynamic Routing for Computation-Efficient Dense LLMs | Feb 18, 2025 | Computational Efficiency, Language Modeling |
| DualComp: End-to-End Learning of a Unified Dual-Modality Lossless Compressor | May 22, 2025 | Mixture-of-Experts |
| Duplex: A Device for Large Language Models with Mixture of Experts, Grouped Query Attention, and Continuous Batching | Sep 2, 2024 | Mixture-of-Experts |
| Each Rank Could be an Expert: Single-Ranked Mixture of Experts LoRA for Multi-Task Learning | Jan 25, 2025 | Mixture-of-Experts, Multi-Task Learning |
| EC-DIT: Scaling Diffusion Transformers with Adaptive Expert-Choice Routing | Oct 2, 2024 | Image Generation, Mixture-of-Experts |
| ECG-EmotionNet: Nested Mixture of Expert (NMoE) Adaptation of ECG-Foundation Model for Driver Emotion Recognition | Mar 3, 2025 | Autonomous Driving, Computational Efficiency |
| Edge-Aware Autoencoder Design for Real-Time Mixture-of-Experts Image Compression | Jul 25, 2022 | Denoising, Image Compression |
| EEGMamba: Bidirectional State Space Model with Mixture of Experts for EEG Multi-task Classification | Jul 20, 2024 | Electroencephalogram (EEG) |
| Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging | Oct 29, 2024 | Mixture-of-Experts, Multi-Task Learning |
| Efficient Data Driven Mixture-of-Expert Extraction from Trained Networks | May 21, 2025 | Mixture-of-Experts |
| Efficient Deweather Mixture-of-Experts with Uncertainty-aware Feature-wise Linear Modulation | Dec 27, 2023 | Image Restoration, Mixture-of-Experts |
| Efficient Language Modeling with Sparse all-MLP | Mar 14, 2022 | All, Common Sense Reasoning |
| Efficient Large Scale Language Modeling with Mixtures of Experts | Dec 20, 2021 | Language Modeling |
| Efficient Large Scale Video Classification | May 22, 2015 | Classification, General Classification |
| EfficientLLM: Efficiency in Large Language Models | May 20, 2025 | Mixture-of-Experts, Quantization |
| Efficient Mixture-of-Expert for Video-based Driver State and Physiological Multi-task Estimation in Conditional Autonomous Driving | Oct 28, 2024 | Autonomous Driving, Mixture-of-Experts |
| Efficient Model Agnostic Approach for Implicit Neural Representation Based Arbitrary-Scale Image Super-Resolution | Nov 20, 2023 | Computational Efficiency, Decoder |
| Efficient Reflectance Capture with a Deep Gated Mixture-of-Experts | Mar 29, 2022 | Decoder, Mixture-of-Experts |
| Efficient Residual Learning with Mixture-of-Experts for Universal Dexterous Grasping | Oct 3, 2024 | GPU, Mixture-of-Experts |
| Efficient Training of Large-Scale AI Models Through Federated Mixture-of-Experts: A System-Level Approach | Jul 8, 2025 | Edge Computing, Federated Learning |
| eMoE: Task-aware Memory Efficient Mixture-of-Experts-Based (MoE) Model Inference | Mar 10, 2025 | Mixture-of-Experts, Scheduling |
| ENACT-Heart -- ENsemble-based Assessment Using CNN and Transformer on Heart Sounds | Feb 24, 2025 | Diagnostic, Mixture-of-Experts |
| Enhancing Code-Switching ASR Leveraging Non-Peaky CTC Loss and Deep Language Posterior Injection | Nov 26, 2024 | Automatic Speech Recognition (ASR) |
| Enhancing Code-Switching Speech Recognition with LID-Based Collaborative Mixture of Experts Model | Sep 3, 2024 | Language Identification, Mixture-of-Experts |
| Enhancing Generalization in Sparse Mixture of Experts Models: The Case for Increased Expert Activation in Compositional Tasks | Oct 17, 2024 | Mixture-of-Experts |
| Enhancing Healthcare Recommendation Systems with a Multimodal LLMs-based MOE Architecture | Dec 16, 2024 | Mixture-of-Experts, Recommendation Systems |
| Enhancing Multimodal Continual Instruction Tuning with BranchLoRA | May 31, 2025 | Mixture-of-Experts |
| Enhancing Multi-modal Models with Heterogeneous MoE Adapters for Fine-tuning | Mar 26, 2025 | Mixture-of-Experts, Parameter-Efficient Fine-Tuning |
| Enhancing the "Immunity" of Mixture-of-Experts Networks for Adversarial Defense | Feb 29, 2024 | Adversarial Defense, Adversarial Robustness |
| Ensemble Learning for Large Language Models in Text and Code Generation: A Survey | Mar 13, 2025 | Code Generation, Ensemble Learning |
| EPS-MoE: Expert Pipeline Scheduler for Cost-Efficient MoE Inference | Oct 16, 2024 | Computational Efficiency, Large Language Model |
| Evaluating Expert Contributions in a MoE LLM for Quiz-Based Tasks | Feb 24, 2025 | Mixture-of-Experts, MMLU |
| EVA: Mixture-of-Experts Semantic Variant Alignment for Compositional Zero-Shot Learning | Jun 26, 2025 | Compositional Zero-Shot Learning, Mixture-of-Experts |
| EVE: Efficient Vision-Language Pre-training with Masked Prediction and Modality-Aware MoE | Aug 23, 2023 | Image-Text Matching, Image-Text Retrieval |
| Every Expert Matters: Towards Effective Knowledge Distillation for Mixture-of-Experts Language Models | Feb 18, 2025 | Knowledge Distillation, Mixture-of-Experts |
| Every FLOP Counts: Scaling a 300B Mixture-of-Experts LING LLM without Premium GPUs | Mar 7, 2025 | Knowledge Graphs, Mixture-of-Experts |
| Every Sample Matters: Leveraging Mixture-of-Experts and High-Quality Data for Efficient and Accurate Code LLM | Mar 22, 2025 | Code Generation, Mixture-of-Experts |
| EvidenceMoE: A Physics-Guided Mixture-of-Experts with Evidential Critics for Advancing Fluorescence Light Detection and Ranging in Scattering Media | May 23, 2025 | Depth Estimation, Mixture-of-Experts |
| EVLM: An Efficient Vision-Language Model for Visual Understanding | Jul 19, 2024 | Image Captioning, Language Modeling |
| EvoMoE: Expert Evolution in Mixture of Experts for Multimodal Large Language Models | May 28, 2025 | Mixture-of-Experts, MME |
| Expert Aggregation for Financial Forecasting | Nov 25, 2021 | BIG-bench Machine Learning, Mixture-of-Experts |
| ExpertFlow: Optimized Expert Activation and Token Allocation for Efficient Mixture-of-Experts Inference | Oct 23, 2024 | Computational Efficiency, CPU |
| Expert Race: A Flexible Routing Strategy for Scaling Diffusion Transformer with Mixture of Experts | Mar 20, 2025 | Mixture-of-Experts |