| Title | Date | Topics | Code | Count |
| --- | --- | --- | --- | --- |
| LLaVA-CMoE: Towards Continual Mixture of Experts for Large Vision-Language Models | Mar 27, 2025 | Mixture-of-Experts | Unverified | 0 |
| iMedImage Technical Report | Mar 27, 2025 | Anomaly Detection, Diagnostic | Unverified | 0 |
| Reasoning Beyond Limits: Advances and Open Problems for LLMs | Mar 26, 2025 | Mixture-of-Experts, RAG | Unverified | 0 |
| MoLe-VLA: Dynamic Layer-skipping Vision Language Action Model via Mixture-of-Layers for Efficient Robot Manipulation | Mar 26, 2025 | Knowledge Distillation, Mixture-of-Experts | Unverified | 0 |
| Modality-Independent Brain Lesion Segmentation with Privacy-aware Continual Learning | Mar 26, 2025 | Continual Learning, Knowledge Distillation | Code Available | 0 |
| Optimal Scaling Laws for Efficiency Gains in a Theoretical Transformer-Augmented Sectional MoE Framework | Mar 26, 2025 | Computational Efficiency, Mixture-of-Experts | Unverified | 0 |
| Enhancing Multi-modal Models with Heterogeneous MoE Adapters for Fine-tuning | Mar 26, 2025 | Mixture-of-Experts, Parameter-Efficient Fine-Tuning | Unverified | 0 |
| A Multi-scale Lithium-ion Battery Capacity Prediction Using Mixture of Experts and Patch-based MLP | Mar 26, 2025 | Mixture-of-Experts | Code Available | 0 |
| M^2CD: A Unified MultiModal Framework for Optical-SAR Change Detection with Mixture of Experts and Self-Distillation | Mar 25, 2025 | Change Detection, Disaster Response | Unverified | 0 |
| Resilient Sensor Fusion under Adverse Sensor Failures via Multi-Modal Expert Fusion | Mar 25, 2025 | Autonomous Driving, Mixture-of-Experts | Unverified | 0 |