| Title | Date | Tasks | Code | Count |
| --- | --- | --- | --- | --- |
| Boost Your NeRF: A Model-Agnostic Mixture of Experts Framework for High Quality and Efficient Rendering | Jul 15, 2024 | Mixture-of-Experts, NeRF | Unverified | 0 |
| MoE-DiffIR: Task-customized Diffusion Priors for Universal Compressed Image Restoration | Jul 15, 2024 | Image Restoration, Mixture-of-Experts | Unverified | 0 |
| MaskMoE: Boosting Token-Level Learning via Routing Mask in Mixture-of-Experts | Jul 13, 2024 | Diversity, Mixture-of-Experts | Code Available | 0 |
| Diversifying the Expert Knowledge for Task-Agnostic Pruning in Sparse Mixture-of-Experts | Jul 12, 2024 | Mixture-of-Experts | Unverified | 0 |
| An Unsupervised Domain Adaptation Method for Locating Manipulated Region in partially fake Audio | Jul 11, 2024 | Data Augmentation, Diversity | Unverified | 0 |
| Swin SMT: Global Sequential Modeling in 3D Medical Image Segmentation | Jul 10, 2024 | Image Segmentation, Medical Image Segmentation | Code Available | 1 |
| MoVEInt: Mixture of Variational Experts for Learning Human-Robot Interactions from Demonstrations | Jul 10, 2024 | Mixture-of-Experts | Code Available | 0 |
| A Simple Architecture for Enterprise Large Language Model Applications based on Role based security and Clearance Levels using Retrieval-Augmented Generation or Mixture of Experts | Jul 9, 2024 | Language Modeling | Unverified | 0 |
| SAM-Med3D-MoE: Towards a Non-Forgetting Segment Anything Model via Mixture of Experts for 3D Medical Image Segmentation | Jul 6, 2024 | General Knowledge, Image Segmentation | Unverified | 0 |
| Completed Feature Disentanglement Learning for Multimodal MRIs Analysis | Jul 6, 2024 | Disentanglement, Mixture-of-Experts | Code Available | 0 |
| YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation | Jul 5, 2024 | Drum Transcription, Drum Transcription in Music (DTM) | Code Available | 3 |
| MobileFlow: A Multimodal LLM For Mobile GUI Agent | Jul 5, 2024 | Action Analysis, Language Modeling | Unverified | 0 |
| Lazarus: Resilient and Elastic Training of Mixture-of-Experts Models with Adaptive Expert Placement | Jul 5, 2024 | GPU, Mixture-of-Experts | Unverified | 0 |
| Mixture of A Million Experts | Jul 4, 2024 | Computational Efficiency, Language Modeling | Code Available | 2 |
| Let the Expert Stick to His Last: Expert-Specialized Fine-Tuning for Sparse Architectural Large Language Models | Jul 2, 2024 | Mixture-of-Experts, parameter-efficient fine-tuning | Code Available | 4 |
| Terminating Differentiable Tree Experts | Jul 2, 2024 | Mixture-of-Experts | Unverified | 0 |
| Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs | Jul 1, 2024 | GPU, Mixture-of-Experts | Code Available | 1 |
| Investigating the potential of Sparse Mixtures-of-Experts for multi-domain neural machine translation | Jul 1, 2024 | Machine Translation, Mixture-of-Experts | Unverified | 0 |
| Sparse Diffusion Policy: A Sparse, Reusable, and Flexible Policy for Robot Learning | Jul 1, 2024 | Continual Learning, Mixture-of-Experts | Unverified | 0 |
| Solving Token Gradient Conflict in Mixture-of-Experts for Large Vision-Language Model | Jun 28, 2024 | Language Modeling | Code Available | 1 |
| LEMoE: Advanced Mixture of Experts Adaptor for Lifelong Model Editing of Large Language Models | Jun 28, 2024 | Mixture-of-Experts, Model Editing | Unverified | 0 |
| A Teacher Is Worth A Million Instructions | Jun 27, 2024 | Mixture-of-Experts | Code Available | 0 |
| Towards Personalized Federated Multi-Scenario Multi-Task Recommendation | Jun 27, 2024 | Federated Learning, Mixture-of-Experts | Unverified | 0 |
| A Survey on Mixture of Experts | Jun 26, 2024 | In-Context Learning, Mixture-of-Experts | Code Available | 3 |
| SC-MoE: Switch Conformer Mixture of Experts for Unified Streaming and Non-streaming Code-Switching ASR | Jun 26, 2024 | Automatic Speech Recognition (ASR) | Unverified | 0 |