
Mixture-of-Experts

Papers

Showing 76–100 of 1312 papers

Title | Status | Hype
CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts | Code | 2
Decomposing the Neurons: Activation Sparsity via Mixture of Experts for Continual Test Time Adaptation | Code | 2
Monet: Mixture of Monosemantic Experts for Transformers | Code | 2
ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing | Code | 2
Flex-MoE: Modeling Arbitrary Modality Combination via the Flexible Mixture-of-Experts | Code | 2
MoE-FFD: Mixture of Experts for Generalized and Parameter-Efficient Face Forgery Detection | Code | 2
CNMBERT: A Model for Converting Hanyu Pinyin Abbreviations to Chinese Characters | Code | 2
ModuleFormer: Modularity Emerges from Mixture-of-Experts | Code | 2
Med-MoE: Mixture of Domain-Specific Experts for Lightweight Medical Vision-Language Models | Code | 2
CLIP-MoE: Towards Building Mixture of Experts for CLIP with Diversified Multiplet Upcycling | Code | 2
Mixture of A Million Experts | Code | 2
Mixture of Lookup Experts | Code | 2
MDFEND: Multi-domain Fake News Detection | Code | 2
Fast Feedforward Networks | Code | 2
MC-MoE: Mixture Compressor for Mixture-of-Experts LLMs Gains More | Code | 2
MiniDrive: More Efficient Vision-Language Models with Multi-Level 2D Features as Text Tokens for Autonomous Driving | Code | 2
Mixture of Tokens: Continuous MoE through Cross-Example Aggregation | Code | 2
MoEUT: Mixture-of-Experts Universal Transformers | Code | 2
MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks | Code | 2
LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training | Code | 2
Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts | Code | 2
Learning Robust Stereo Matching in the Wild with Selective Mixture-of-Experts | Code | 2
LiMoE: Mixture of LiDAR Representation Learners from Automotive Scenes | Code | 2
Learning A Sparse Transformer Network for Effective Image Deraining | Code | 2
A Closer Look into Mixture-of-Experts in Large Language Models | Code | 2
