SOTAVerified

Mixture-of-Experts

Papers

Showing 551–600 of 1312 papers

Title | Status | Hype
LOLA -- An Open-Source Massively Multilingual Large Language Model | Code | 1
LPT++: Efficient Training on Mixture of Long-tailed Experts | | 0
Adaptive Segmentation-Based Initialization for Steered Mixture of Experts Image Regression | | 0
Integrating AI's Carbon Footprint into Risk Management Frameworks: Strategies and Tools for Sustainable Compliance in Banking Sector | | 0
MiniDrive: More Efficient Vision-Language Models with Multi-Level 2D Features as Text Tokens for Autonomous Driving | Code | 2
DA-MoE: Towards Dynamic Expert Allocation for Mixture-of-Experts Models | | 0
VE: Modeling Multivariate Time Series Correlation with Variate Embedding | Code | 0
STUN: Structured-Then-Unstructured Pruning for Scalable MoE Pruning | | 0
M3-Jepa: Multimodal Alignment via Multi-directional MoE based on the JEPA framework | Code | 1
Adapted-MoE: Mixture of Experts with Test-Time Adaption for Anomaly Detection | | 0
Interpretable mixture of experts for time series prediction under recurrent and non-recurrent conditions | | 0
Pluralistic Salient Object Detection | | 0
Configurable Foundation Models: Building LLMs from a Modular Perspective | | 0
Enhancing Code-Switching Speech Recognition with LID-Based Collaborative Mixture of Experts Model | | 0
OLMoE: Open Mixture-of-Experts Language Models | Code | 4
Duplex: A Device for Large Language Models with Mixture of Experts, Grouped Query Attention, and Continuous Batching | | 0
Beyond Parameter Count: Implicit Bias in Soft Mixture of Experts | | 0
Gradient-free variational learning with conditional mixture networks | Code | 1
Auxiliary-Loss-Free Load Balancing Strategy for Mixture-of-Experts | | 0
Nexus: Specialization meets Adaptability for Efficiently Training Mixture of Experts | | 0
LLaVA-MoD: Making LLaVA Tiny via MoE Knowledge Distillation | Code | 3
Parameter-Efficient Quantized Mixture-of-Experts Meets Vision-Language Instruction Tuning for Semiconductor Electron Micrograph Analysis | | 0
Advancing Enterprise Spatio-Temporal Forecasting Applications: Data Mining Meets Instruction Tuning of Language Models For Multi-modal Time Series Analysis in Low-Resource Settings | | 0
The Ultimate Guide to Fine-Tuning LLMs from Basics to Breakthroughs: An Exhaustive Review of Technologies, Research, Best Practices, Applied Research Challenges and Opportunities | | 0
La-SoftMoE CLIP for Unified Physical-Digital Face Attack Detection | | 0
Multi-Treatment Multi-Task Uplift Modeling for Enhancing User Growth | | 0
DutyTTE: Deciphering Uncertainty in Origin-Destination Travel Time Estimation | Code | 0
SQL-GEN: Bridging the Dialect Gap for Text-to-SQL Via Synthetic Data And Model Merging | | 0
Jamba-1.5: Hybrid Transformer-Mamba Models at Scale | Code | 5
Improving Factuality in Large Language Models via Decoding-Time Hallucinatory and Truthful Comparators | Code | 0
MoE-LPR: Multilingual Extension of Large Language Models through Mixture-of-Experts with Language Priors Routing | Code | 0
FedMoE: Personalized Federated Learning via Heterogeneous Mixture of Experts | | 0
KAN4TSF: Are KAN and KAN-based models Effective for Time Series Forecasting? | Code | 2
HMoE: Heterogeneous Mixture of Experts for Language Modeling | | 0
Navigating Spatio-Temporal Heterogeneity: A Graph Transformer Approach for Traffic Forecasting | Code | 1
AnyGraph: Graph Foundation Model in the Wild | Code | 3
AdapMoE: Adaptive Sensitivity-based Expert Gating and Management for Efficient MoE Inference | Code | 1
A Unified Framework for Iris Anti-Spoofing: Introducing IrisGeneral Dataset and Masked-MoE Method | | 0
Customizing Language Models with Instance-wise LoRA for Sequential Recommendation | Code | 1
FEDKIM: Adaptive Federated Knowledge Injection into Medical Foundation Models | Code | 0
Integrating Multi-view Analysis: Multi-view Mixture-of-Expert for Textual Personality Detection | Code | 0
FactorLLM: Factorizing Knowledge via Mixture of Experts for Large Language Models | Code | 0
BAM! Just Like That: Simple and Efficient Parameter Upcycling for Mixture of Experts | | 0
A Survey on Model MoErging: Recycling and Routing Among Specialized Experts for Collaborative Learning | | 0
AquilaMoE: Efficient Training for MoE Models with Scale-Up and Scale-Out Strategies | Code | 1
Layerwise Recurrent Router for Mixture-of-Experts | Code | 1
HoME: Hierarchy of Multi-Gate Experts for Multi-Task Learning at Kuaishou | | 0
Understanding the Performance and Estimating the Cost of LLM Fine-Tuning | Code | 0
LaDiMo: Layer-wise Distillation Inspired MoEfier | | 0
MoC-System: Efficient Fault Tolerance for Sparse Mixture-of-Experts Model Training | | 0
Page 12 of 27

No leaderboard results yet.