SOTAVerified

Mixture-of-Experts

Papers

Showing 71–80 of 1312 papers

Title | Status | Hype
MoESD: Unveil Speculative Decoding's Potential for Accelerating Sparse MoE | | 0
WINA: Weight Informed Neuron Activation for Accelerating Large Language Model Inference | Code | 2
FLAME-MoE: A Transparent End-to-End Research Platform for Mixture-of-Experts Language Models | Code | 1
NEXT: Multi-Grained Mixture of Experts via Text-Modulation for Multi-Modal Object Re-ID | | 0
Mosaic: Data-Free Knowledge Distillation via Mixture-of-Experts for Heterogeneous Distributed Environments | Code | 0
Integrating Dynamical Systems Learning with Foundational Models: A Meta-Evolutionary AI Framework for Clinical Trials | | 0
RankLLM: A Python Package for Reranking with LLMs | Code | 0
I2MoE: Interpretable Multimodal Interaction-aware Mixture-of-Experts | Code | 2
ThanoRA: Task Heterogeneity-Aware Multi-Task Low-Rank Adaptation | Code | 1
On Minimax Estimation of Parameters in Softmax-Contaminated Mixture of Experts | | 0
Page 8 of 132

No leaderboard results yet.