| Title | Date | Topics | Code | Stars |
|---|---|---|---|---|
| MoNTA: Accelerating Mixture-of-Experts Training with Network-Traffic-Aware Parallel Optimization | Nov 1, 2024 | Mixture-of-Experts | Code Available | 0 |
| MoE-I^2: Compressing Mixture of Experts Models through Inter-Expert Pruning and Intra-Expert Low-Rank Decomposition | Nov 1, 2024 | Mixture-of-Experts | Code Available | 0 |
| LIBMoE: A Library for Comprehensive Benchmarking of Mixture of Experts in Large Language Models | Nov 1, 2024 | Benchmarking, Mixture-of-Experts | Code Available | 1 |
| Stereo-Talker: Audio-driven 3D Human Synthesis with Prior-Guided Mixture-of-Experts | Oct 31, 2024 | Language Modeling | Unverified | 0 |
| Efficient and Interpretable Grammatical Error Correction with Mixture of Experts | Oct 30, 2024 | Grammatical Error Correction, Mixture-of-Experts | Code Available | 0 |
| Stealing User Prompts from Mixture of Experts | Oct 30, 2024 | Mixture-of-Experts | Unverified | 0 |
| MALoRA: Mixture of Asymmetric Low-Rank Adaptation for Enhanced Multi-Task Learning | Oct 30, 2024 | Computational Efficiency, Mixture-of-Experts | Unverified | 0 |
| ProMoE: Fast MoE-based LLM Serving using Proactive Caching | Oct 29, 2024 | GPU, Mixture-of-Experts | Unverified | 0 |
| Neural Experts: Mixture of Experts for Implicit Neural Representations | Oct 29, 2024 | Image Reconstruction, Mixture-of-Experts | Unverified | 0 |
| Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging | Oct 29, 2024 | Mixture-of-Experts, Multi-Task Learning | Unverified | 0 |
| FinTeamExperts: Role-Specialized MoEs for Financial Analysis | Oct 28, 2024 | Financial Analysis, Mixture-of-Experts | Unverified | 0 |
| Efficient Mixture-of-Expert for Video-based Driver State and Physiological Multi-task Estimation in Conditional Autonomous Driving | Oct 28, 2024 | Autonomous Driving, Mixture-of-Experts | Unverified | 0 |
| DMT-HI: MoE-based Hyperbolic Interpretable Deep Manifold Transformation for Unsupervised Dimensionality Reduction | Oct 25, 2024 | Dimensionality Reduction, Mixture-of-Experts | Code Available | 1 |
| Hierarchical Mixture of Experts: Generalizable Learning for High-Level Synthesis | Oct 25, 2024 | High-Level Synthesis, Mixture-of-Experts | Code Available | 0 |
| Mixture of Parrots: Experts Improve Memorization More Than Reasoning | Oct 24, 2024 | Math, Memorization | Unverified | 0 |
| Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design | Oct 24, 2024 | Mixture-of-Experts, MMLU | Code Available | 1 |
| MoMQ: Mixture-of-Experts Enhances Multi-Dialect Query Generation across Relational and Non-Relational Databases | Oct 24, 2024 | Mixture-of-Experts | Unverified | 0 |
| Robust and Explainable Depression Identification from Speech Using Vowel-Based Ensemble Learning Approaches | Oct 23, 2024 | Ensemble Learning, Mixture-of-Experts | Unverified | 0 |
| Faster Language Models with Better Multi-Token Prediction Using Tensor Decomposition | Oct 23, 2024 | Code Generation, Mixture-of-Experts | Unverified | 0 |
| MiLoRA: Efficient Mixture of Low-Rank Adaptation for Large Language Models Fine-tuning | Oct 23, 2024 | Math, Mixture-of-Experts | Unverified | 0 |
| ExpertFlow: Optimized Expert Activation and Token Allocation for Efficient Mixture-of-Experts Inference | Oct 23, 2024 | Computational Efficiency, CPU | Unverified | 0 |
| Optimizing Mixture-of-Experts Inference Time Combining Model Deployment and Communication Scheduling | Oct 22, 2024 | GPU | Unverified | 0 |
| LMHaze: Intensity-aware Image Dehazing with a Large-scale Multi-intensity Real Haze Dataset | Oct 21, 2024 | Image Dehazing, Mamba | Code Available | 1 |
| Generalizing Motion Planners with Mixture of Experts for Autonomous Driving | Oct 21, 2024 | Autonomous Driving, Data Augmentation | Code Available | 3 |
| CartesianMoE: Boosting Knowledge Sharing among Experts via Cartesian Product Routing in Mixture-of-Experts | Oct 21, 2024 | Mixture-of-Experts | Code Available | 0 |