| A Generalist Cross-Domain Molecular Learning Framework for Structure-Based Drug Discovery | Mar 6, 2025 | Denoising, Drug Discovery | Unverified | 0 |
| Predictable Scale: Part I -- Optimal Hyperparameter Scaling Law in Large Language Model Pretraining | Mar 6, 2025 | GPU, Hyperparameter Optimization | Unverified | 0 |
| Speculative MoE: Communication Efficient Parallel MoE Inference with Speculative Token and Expert Pre-scheduling | Mar 6, 2025 | Mixture-of-Experts, Scheduling | Unverified | 0 |
| Convergence Rates for Softmax Gating Mixture of Experts | Mar 5, 2025 | Mixture-of-Experts, Parameter Estimation | Unverified | 0 |
| BrainNet-MoE: Brain-Inspired Mixture-of-Experts Learning for Neurological Disease Identification | Mar 5, 2025 | Mixture-of-Experts | Unverified | 0 |
| VoiceGRPO: Modern MoE Transformers with Group Relative Policy Optimization (GRPO) for AI Voice Health Care Applications on Voice Pathology Detection | Mar 5, 2025 | Diagnostic, Mixture-of-Experts | Code Available | 0 |
| Tabby: Tabular Data Synthesis with Language Models | Mar 4, 2025 | Language Modeling | Unverified | 0 |
| Union of Experts: Adapting Hierarchical Routing to Equivalently Decomposed Transformer | Mar 4, 2025 | Computational Efficiency, Mixture-of-Experts | Code Available | 0 |
| How Do Consumers Really Choose: Exposing Hidden Preferences with the Mixture of Experts Model | Mar 3, 2025 | Decision Making, Demand Forecasting | Unverified | 0 |
| PROPER: A Progressive Learning Framework for Personalized Large Language Models with Group-Level Adaptation | Mar 3, 2025 | Mixture-of-Experts, Parameter-Efficient Fine-Tuning | Unverified | 0 |