SOTAVerified

Mixture-of-Experts

Papers

Showing 26–50 of 1312 papers (page 2 of 53)

Title | Status | Hype
GuiLoMo: Allocating Expert Number and Rank for LoRA-MoE via Bilevel Optimization with GuidedSelection Vectors | Code | 0
Single-Example Learning in a Mixture of GPDMs with Latent Geometries | | 0
Ring-lite: Scalable Reasoning via C3PO-Stabilized Reinforcement Learning for LLMs | | 0
Scaling Intelligence: Designing Data Centers for Next-Gen Language Models | | 0
MoTE: Mixture of Ternary Experts for Memory-efficient Large Multimodal Models | | 0
Exploring Speaker Diarization with Mixture of Experts | | 0
MiniMax-M1: Scaling Test-Time Compute Efficiently with Lightning Attention | Code | 7
Load Balancing Mixture of Experts with Similarity Preserving Routers | | 0
EAQuant: Enhancing Post-Training Quantization for MoE Models via Expert-Aware Optimization | Code | 0
Serving Large Language Models on Huawei CloudMatrix384 | | 0
Structural Similarity-Inspired Unfolding for Lightweight Image Super-Resolution | Code | 1
Optimus-3: Towards Generalist Multimodal Minecraft Agents with Scalable Task Experts | | 0
GigaChat Family: Efficient Russian Language Modeling Through Mixture of Experts Architecture | | 0
MedMoE: Modality-Specialized Mixture of Experts for Medical Vision-Language Understanding | | 0
A Two-Phase Deep Learning Framework for Adaptive Time-Stepping in High-Speed Flow Modeling | Code | 0
M2Restore: Mixture-of-Experts-based Mamba-CNN Fusion Framework for All-in-One Image Restoration | | 0
MIRA: Medical Time Series Foundation Model for Real-World Health Data | | 0
STAMImputer: Spatio-Temporal Attention MoE for Traffic Data Imputation | Code | 0
MoE-MLoRA for Multi-Domain CTR Prediction: Efficient Adaptation with Expert Specialization | Code | 0
MoE-GPS: Guidlines for Prediction Strategy for Dynamic Expert Duplication in MoE Load Balancing | | 0
Breaking Data Silos: Towards Open and Scalable Mobility Foundation Models via Generative Continual Learning | | 0
SMAR: Soft Modality-Aware Routing Strategy for MoE-based Multimodal Large Language Models Preserving Language Capabilities | | 0
Lifelong Evolution: Collaborative Learning between Large and Small Language Models for Continuous Emergent Fake News Detection | | 0
FlashDMoE: Fast Distributed MoE in a Single Kernel | Code | 3
Brain-Like Processing Pathways Form in Models With Heterogeneous Experts | | 0
