
Mixture-of-Experts

Papers

Showing 451–475 of 1312 papers

Title | Status | Hype
MoNTA: Accelerating Mixture-of-Experts Training with Network-Traffic-Aware Parallel Optimization | Code | 0
MoE-I^2: Compressing Mixture of Experts Models through Inter-Expert Pruning and Intra-Expert Low-Rank Decomposition | Code | 0
LIBMoE: A Library for comprehensive benchmarking Mixture of Experts in Large Language Models | Code | 1
Stereo-Talker: Audio-driven 3D Human Synthesis with Prior-Guided Mixture-of-Experts | – | 0
Efficient and Interpretable Grammatical Error Correction with Mixture of Experts | Code | 0
Stealing User Prompts from Mixture of Experts | – | 0
MALoRA: Mixture of Asymmetric Low-Rank Adaptation for Enhanced Multi-Task Learning | – | 0
ProMoE: Fast MoE-based LLM Serving using Proactive Caching | – | 0
Neural Experts: Mixture of Experts for Implicit Neural Representations | – | 0
Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging | – | 0
FinTeamExperts: Role Specialized MOEs For Financial Analysis | – | 0
Efficient Mixture-of-Expert for Video-based Driver State and Physiological Multi-task Estimation in Conditional Autonomous Driving | – | 0
DMT-HI: MOE-based Hyperbolic Interpretable Deep Manifold Transformation for Unsupervised Dimensionality Reduction | Code | 1
Hierarchical Mixture of Experts: Generalizable Learning for High-Level Synthesis | Code | 0
Mixture of Parrots: Experts improve memorization more than reasoning | – | 0
Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design | Code | 1
MoMQ: Mixture-of-Experts Enhances Multi-Dialect Query Generation across Relational and Non-Relational Databases | – | 0
Robust and Explainable Depression Identification from Speech Using Vowel-Based Ensemble Learning Approaches | – | 0
Faster Language Models with Better Multi-Token Prediction Using Tensor Decomposition | – | 0
MiLoRA: Efficient Mixture of Low-Rank Adaptation for Large Language Models Fine-tuning | – | 0
ExpertFlow: Optimized Expert Activation and Token Allocation for Efficient Mixture-of-Experts Inference | – | 0
Optimizing Mixture-of-Experts Inference Time Combining Model Deployment and Communication Scheduling | – | 0
LMHaze: Intensity-aware Image Dehazing with a Large-scale Multi-intensity Real Haze Dataset | Code | 1
Generalizing Motion Planners with Mixture of Experts for Autonomous Driving | Code | 3
CartesianMoE: Boosting Knowledge Sharing among Experts via Cartesian Product Routing in Mixture-of-Experts | Code | 0
