| Holistic Capability Preservation: Towards Compact Yet Comprehensive Reasoning Models | Apr 9, 2025 | Instruction Following, Mathematical Problem-Solving | Unverified | 0 | 0 |
| HoME: Hierarchy of Multi-Gate Experts for Multi-Task Learning at Kuaishou | Aug 10, 2024 | Mixture-of-Experts, Multi-Task Learning | Unverified | 0 | 0 |
| HOMOE: A Memory-Based and Composition-Aware Framework for Zero-Shot Learning with Hopfield Network and Soft Mixture of Experts | Nov 23, 2023 | Compositional Zero-Shot Learning, Mixture-of-Experts | Unverified | 0 | 0 |
| How Can Cross-lingual Knowledge Contribute Better to Fine-Grained Entity Typing? | May 1, 2022 | Entity Typing, Mixture-of-Experts | Unverified | 0 | 0 |
| How Do Consumers Really Choose: Exposing Hidden Preferences with the Mixture of Experts Model | Mar 3, 2025 | Decision Making, Demand Forecasting | Unverified | 0 | 0 |
| How does Architecture Influence the Base Capabilities of Pre-trained Language Models? A Case Study Based on FFN-Wider and MoE Transformers | Mar 4, 2024 | Few-Shot Learning, Language Modeling | Unverified | 0 | 0 |
| How Lightweight Can A Vision Transformer Be | Jul 25, 2024 | Mixture-of-Experts, Transfer Learning | Unverified | 0 | 0 |
| How to Upscale Neural Networks with Scaling Law? A Survey and Practical Guidelines | Feb 17, 2025 | Mixture-of-Experts | Unverified | 0 | 0 |
| Hunyuan-TurboS: Advancing Large Language Models through Mamba-Transformer Synergy and Adaptive Chain-of-Thought | May 21, 2025 | Chatbot, Instruction Following | Unverified | 0 | 0 |
| HydraSum: Disentangling Stylistic Features in Text Summarization using Multi-Decoder Models | Sep 29, 2021 | Abstractive Text Summarization, Decoder | Unverified | 0 | 0 |