| Title | Date | Topics | Code | Count |
|---|---|---|---|---|
| MH-MoE: Multi-Head Mixture-of-Experts | Nov 25, 2024 | Mixture-of-Experts | Unverified | 0 |
| Lifelong Knowledge Editing for Vision Language Models with Low-Rank Mixture-of-Experts | Nov 23, 2024 | Knowledge Editing, Mixture-of-Experts | Unverified | 0 |
| MERLOT: A Distilled LLM-based Mixture-of-Experts Framework for Scalable Encrypted Traffic Classification | Nov 20, 2024 | Decoder, Language Modeling | Unverified | 0 |
| KAAE: Numerical Reasoning for Knowledge Graphs via Knowledge-aware Attributes Learning | Nov 20, 2024 | Attribute, Contrastive Learning | Unverified | 0 |
| Ultra-Sparse Memory Network | Nov 19, 2024 | Mixture-of-Experts | Unverified | 0 |
| MoE-Lightning: High-Throughput MoE Inference on Memory-constrained GPUs | Nov 18, 2024 | Computational Efficiency, CPU | Unverified | 0 |
| Weakly-Supervised Multimodal Learning on MIMIC-CXR | Nov 15, 2024 | Data Integration, Mixture-of-Experts | Code Available | 0 |
| Sparse Upcycling: Inference Inefficient Finetuning | Nov 13, 2024 | Mixture-of-Experts | Unverified | 0 |
| Lynx: Enabling Efficient MoE Inference through Dynamic Batch-Aware Expert Selection | Nov 13, 2024 | Code Generation, Mathematical Reasoning | Unverified | 0 |
| Towards Vision Mixture of Experts for Wildlife Monitoring on the Edge | Nov 12, 2024 | Mixture-of-Experts | Unverified | 0 |
| PERFT: Parameter-Efficient Routed Fine-Tuning for Mixture-of-Expert Model | Nov 12, 2024 | Arithmetic Reasoning, Mixture-of-Experts | Unverified | 0 |
| Imitation Learning from Observations: An Autoregressive Mixture of Experts Approach | Nov 12, 2024 | Autonomous Driving, Imitation Learning | Unverified | 0 |
| Adaptive Conditional Expert Selection Network for Multi-domain Recommendation | Nov 11, 2024 | Computational Efficiency, Mixture-of-Experts | Unverified | 0 |
| WDMoE: Wireless Distributed Mixture of Experts for Large Language Models | Nov 11, 2024 | Mixture-of-Experts | Unverified | 0 |
| NeKo: Toward Post Recognition Generative Correction Large Language Models with Task-Oriented Experts | Nov 8, 2024 | Mixture-of-Experts, Optical Character Recognition (OCR) | Unverified | 0 |
| DA-MoE: Addressing Depth-Sensitivity in Graph-Level Analysis through Mixture of Experts | Nov 5, 2024 | Mixture-of-Experts, Sensitivity | Code Available | 0 |
| Advancing Robust Underwater Acoustic Target Recognition through Multi-task Learning and Multi-Gate Mixture-of-Experts | Nov 5, 2024 | Mixture-of-Experts, Multi-Task Learning | Unverified | 0 |
| FedMoE-DA: Federated Mixture of Experts via Domain Aware Fine-grained Aggregation | Nov 4, 2024 | Federated Learning, Mixture-of-Experts | Unverified | 0 |
| HOBBIT: A Mixed Precision Expert Offloading System for Fast MoE Inference | Nov 3, 2024 | Mixture-of-Experts | Unverified | 0 |
| Facet-Aware Multi-Head Mixture-of-Experts Model for Sequential Recommendation | Nov 3, 2024 | Mixture-of-Experts, Sequential Recommendation | Unverified | 0 |
| RS-MoE: Mixture of Experts for Remote Sensing Image Captioning and Visual Question Answering | Nov 3, 2024 | Descriptive, Image Captioning | Unverified | 0 |
| PMoL: Parameter Efficient MoE for Preference Mixing of LLM Alignment | Nov 2, 2024 | Mixture-of-Experts | Unverified | 0 |
| MoNTA: Accelerating Mixture-of-Experts Training with Network-Traffic-Aware Parallel Optimization | Nov 1, 2024 | 8k, Mixture-of-Experts | Code Available | 0 |
| MoE-I^2: Compressing Mixture of Experts Models through Inter-Expert Pruning and Intra-Expert Low-Rank Decomposition | Nov 1, 2024 | Mixture-of-Experts | Code Available | 0 |
| Stereo-Talker: Audio-driven 3D Human Synthesis with Prior-Guided Mixture-of-Experts | Oct 31, 2024 | Language Modeling | Unverified | 0 |