| Title | Date | Tasks |
| --- | --- | --- |
| A Unified Virtual Mixture-of-Experts Framework: Enhanced Inference and Hallucination Mitigation in Single-Model System | Apr 1, 2025 | Dialogue Generation, Ensemble Learning |
| How Do Consumers Really Choose: Exposing Hidden Preferences with the Mixture of Experts Model | Mar 3, 2025 | Decision Making, Demand Forecasting |
| How Can Cross-lingual Knowledge Contribute Better to Fine-Grained Entity Typing? | May 1, 2022 | Entity Typing, Mixture-of-Experts |
| HOMOE: A Memory-Based and Composition-Aware Framework for Zero-Shot Learning with Hopfield Network and Soft Mixture of Experts | Nov 23, 2023 | Compositional Zero-Shot Learning, Mixture-of-Experts |
| HoME: Hierarchy of Multi-Gate Experts for Multi-Task Learning at Kuaishou | Aug 10, 2024 | Mixture-of-Experts, Multi-Task Learning |
| A Unified Framework for Iris Anti-Spoofing: Introducing IrisGeneral Dataset and Masked-MoE Method | Aug 19, 2024 | Iris Recognition, Mixture-of-Experts |
| Holistic Capability Preservation: Towards Compact Yet Comprehensive Reasoning Models | Apr 9, 2025 | Instruction Following, Mathematical Problem-Solving |
| HOBBIT: A Mixed Precision Expert Offloading System for Fast MoE Inference | Nov 3, 2024 | Mixture-of-Experts |
| HMOE: Hypernetwork-based Mixture of Experts for Domain Generalization | Nov 15, 2022 | Domain Generalization, Mixture-of-Experts |
| HMoE: Heterogeneous Mixture of Experts for Language Modeling | Aug 20, 2024 | Computational Efficiency, Language Modeling |