| Title | Date | Topics | Code | Citations |
| --- | --- | --- | --- | --- |
| FaVChat: Unlocking Fine-Grained Facial Video Understanding with Multimodal Large Language Models | Mar 12, 2025 | Mixture-of-Experts, Question Answering | Unverified | 0 |
| Towards Robust Multimodal Representation: A Unified Approach with Adaptive Experts and Alignment | Mar 12, 2025 | Contrastive Learning, Decision Making | Code Available | 0 |
| Double-Stage Feature-Level Clustering-Based Mixture of Experts Framework | Mar 12, 2025 | Clustering, Diversity | Unverified | 0 |
| Priority-Aware Preemptive Scheduling for Mixed-Priority Workloads in MoE Inference | Mar 12, 2025 | Blocking, GPU | Unverified | 0 |
| Automatic Operator-level Parallelism Planning for Distributed Deep Learning -- A Mixed-Integer Programming Approach | Mar 12, 2025 | Computational Efficiency, Mixture-of-Experts | Unverified | 0 |
| MoRE: Unlocking Scalability in Reinforcement Learning for Quadruped Vision-Language-Action Models | Mar 11, 2025 | Large Language Model, Mixture-of-Experts | Unverified | 0 |
| UniF^2ace: Fine-grained Face Understanding and Generation with Unified Multimodal Models | Mar 11, 2025 | Attribute, Mixture-of-Experts | Unverified | 0 |
| MoE-Loco: Mixture of Experts for Multitask Locomotion | Mar 11, 2025 | Mixture-of-Experts | Unverified | 0 |
| Accelerating MoE Model Inference with Expert Sharding | Mar 11, 2025 | Decoder, GPU | Unverified | 0 |
| GM-MoE: Low-Light Enhancement with Gated-Mechanism Mixture-of-Experts | Mar 10, 2025 | 3D Reconstruction, Autonomous Driving | Unverified | 0 |
| A Comprehensive Survey of Mixture-of-Experts: Algorithms, Theory, and Applications | Mar 10, 2025 | Continual Learning, Meta-Learning | Code Available | 9 |
| eMoE: Task-aware Memory Efficient Mixture-of-Experts-Based (MoE) Model Inference | Mar 10, 2025 | Mixture-of-Experts, Scheduling | Unverified | 0 |
| ResMoE: Space-efficient Compression of Mixture of Experts LLMs via Residual Restoration | Mar 10, 2025 | Mixture-of-Experts | Code Available | 0 |
| Swift Hydra: Self-Reinforcing Generative Framework for Anomaly Detection with Multiple Mamba Models | Mar 9, 2025 | Anomaly Detection, Mamba | Code Available | 0 |
| MoFE: Mixture of Frozen Experts Architecture | Mar 9, 2025 | Mixture-of-Experts, Parameter-Efficient Fine-Tuning | Unverified | 0 |
| MANDARIN: Mixture-of-Experts Framework for Dynamic Delirium and Coma Prediction in ICU Patients: Development and Validation of an Acute Brain Dysfunction Prediction Model | Mar 8, 2025 | Mixture-of-Experts | Unverified | 0 |
| A Novel Trustworthy Video Summarization Algorithm Through a Mixture of LoRA Experts | Mar 8, 2025 | Mixture-of-Experts, Video Summarization | Unverified | 0 |
| MoEMoE: Question Guided Dense and Scalable Sparse Mixture-of-Expert for Multi-source Multi-modal Answering | Mar 8, 2025 | Answer Generation, Mixture-of-Experts | Unverified | 0 |
| Capacity-Aware Inference: Mitigating the Straggler Effect in Mixture of Experts | Mar 7, 2025 | Mixture-of-Experts | Unverified | 0 |
| FMT: A Multimodal Pneumonia Detection Model Based on Stacking MOE Framework | Mar 7, 2025 | Diagnostic, Medical Image Analysis | Unverified | 0 |
| Every FLOP Counts: Scaling a 300B Mixture-of-Experts LING LLM without Premium GPUs | Mar 7, 2025 | Knowledge Graphs, Mixture-of-Experts | Unverified | 0 |
| Symbolic Mixture-of-Experts: Adaptive Skill-based Routing for Heterogeneous Reasoning | Mar 7, 2025 | GPU, Math | Unverified | 0 |
| Linear-MoE: Linear Sequence Modeling Meets Mixture-of-Experts | Mar 7, 2025 | Mixture-of-Experts, State Space Models | Code Available | 2 |
| Continual Pre-training of MoEs: How robust is your router? | Mar 6, 2025 | Decoder, Mixture-of-Experts | Unverified | 0 |
| TS-RAG: Retrieval-Augmented Generation based Time Series Foundation Models are Stronger Zero-Shot Forecaster | Mar 6, 2025 | Domain Adaptation, Mixture-of-Experts | Unverified | 0 |