SOTAVerified

Mixture-of-Experts

Papers

Showing 351–375 of 1312 papers

Title | Status | Hype
Multi-Source Domain Adaptation with Mixture of Experts | Code | 0
Multimodal Cultural Safety: Evaluation Frameworks and Alignment Strategies | Code | 0
Multimodal Fusion Strategies for Mapping Biophysical Landscape Features | Code | 0
AskChart: Universal Chart Understanding through Textual Enhancement | Code | 0
MoVEInt: Mixture of Variational Experts for Learning Human-Robot Interactions from Demonstrations | Code | 0
MoRE-Brain: Routed Mixture of Experts for Interpretable and Generalizable Cross-Subject fMRI Visual Decoding | Code | 0
MOoSE: Multi-Orientation Sharing Experts for Open-set Scene Text Recognition | Code | 0
More Experts Than Galaxies: Conditionally-overlapping Experts With Biologically-Inspired Fixed Routing | Code | 0
MoNTA: Accelerating Mixture-of-Experts Training with Network-Traffc-Aware Parallel Optimization | Code | 0
Mosaic: Data-Free Knowledge Distillation via Mixture-of-Experts for Heterogeneous Distributed Environments | Code | 0
Multi-modal Collaborative Optimization and Expansion Network for Event-assisted Single-eye Expression Recognition | Code | 0
ASEM: Enhancing Empathy in Chatbot through Attention-based Sentiment and Emotion Modeling | Code | 0
Condensing Multilingual Knowledge with Lightweight Language-Specific Modules | Code | 0
A Gaussian Process-based Streaming Algorithm for Prediction of Time Series With Regimes and Outliers | Code | 0
MoLEx: Mixture of Layer Experts for Finetuning with Sparse Upcycling | Code | 0
A Gated Residual Kolmogorov-Arnold Networks for Mixtures of Experts | Code | 0
Completed Feature Disentanglement Learning for Multimodal MRIs Analysis | Code | 0
Mol-MoE: Training Preference-Guided Routers for Molecule Generation | Code | 0
CompeteSMoE -- Statistically Guaranteed Mixture of Experts Training via Competition | Code | 0
CompeteSMoE - Effective Training of Sparse Mixture of Experts via Competition | Code | 0
MoE-MLoRA for Multi-Domain CTR Prediction: Efficient Adaptation with Expert Specialization | Code | 0
MoE-LPR: Multilingual Extension of Large Language Models through Mixture-of-Experts with Language Priors Routing | Code | 0
CoLA: Collaborative Low-Rank Adaptation | Code | 0
Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts | Code | 0
Cluster-Driven Expert Pruning for Mixture-of-Experts Large Language Models | Code | 0

No leaderboard results yet.