SOTAVerified

Mixture-of-Experts

Papers

Showing 1001–1025 of 1312 papers

Title | Status | Hype
Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging | - | 0
Efficient Data Driven Mixture-of-Expert Extraction from Trained Networks | - | 0
Efficient Deweather Mixture-of-Experts with Uncertainty-aware Feature-wise Linear Modulation | - | 0
Efficient Language Modeling with Sparse all-MLP | - | 0
Efficient Large Scale Language Modeling with Mixtures of Experts | - | 0
Efficient Large Scale Video Classification | - | 0
EfficientLLM: Efficiency in Large Language Models | - | 0
Efficient Mixture-of-Expert for Video-based Driver State and Physiological Multi-task Estimation in Conditional Autonomous Driving | - | 0
Efficient Model Agnostic Approach for Implicit Neural Representation Based Arbitrary-Scale Image Super-Resolution | - | 0
Efficient Reflectance Capture with a Deep Gated Mixture-of-Experts | - | 0
Efficient Residual Learning with Mixture-of-Experts for Universal Dexterous Grasping | - | 0
Efficient Training of Large-Scale AI Models Through Federated Mixture-of-Experts: A System-Level Approach | - | 0
eMoE: Task-aware Memory Efficient Mixture-of-Experts-Based (MoE) Model Inference | - | 0
ENACT-Heart -- ENsemble-based Assessment Using CNN and Transformer on Heart Sounds | - | 0
Enhancing Code-Switching ASR Leveraging Non-Peaky CTC Loss and Deep Language Posterior Injection | - | 0
Enhancing Code-Switching Speech Recognition with LID-Based Collaborative Mixture of Experts Model | - | 0
Enhancing Generalization in Sparse Mixture of Experts Models: The Case for Increased Expert Activation in Compositional Tasks | - | 0
Enhancing Healthcare Recommendation Systems with a Multimodal LLMs-based MOE Architecture | - | 0
Enhancing Multimodal Continual Instruction Tuning with BranchLoRA | - | 0
Enhancing Multi-modal Models with Heterogeneous MoE Adapters for Fine-tuning | - | 0
Enhancing the "Immunity" of Mixture-of-Experts Networks for Adversarial Defense | - | 0
Ensemble Learning for Large Language Models in Text and Code Generation: A Survey | - | 0
EPS-MoE: Expert Pipeline Scheduler for Cost-Efficient MoE Inference | - | 0
Evaluating Expert Contributions in a MoE LLM for Quiz-Based Tasks | - | 0
EVA: Mixture-of-Experts Semantic Variant Alignment for Compositional Zero-Shot Learning | - | 0
Page 41 of 53

No leaderboard results yet.