| Title | Date | Topics | Code | Count |
|---|---|---|---|---|
| Towards Adversarial Robustness of Model-Level Mixture-of-Experts Architectures for Semantic Segmentation | Dec 16, 2024 | Adversarial Robustness, Mixture-of-Experts | Code Available | 0 |
| Enhancing Healthcare Recommendation Systems with a Multimodal LLMs-based MOE Architecture | Dec 16, 2024 | Mixture-of-Experts, Recommendation Systems | Unverified | 0 |
| Investigating Mixture of Experts in Dense Retrieval | Dec 16, 2024 | Information Retrieval, Mixture-of-Experts | Unverified | 0 |
| Llama 3 Meets MoE: Efficient Upcycling | Dec 13, 2024 | Mixture-of-Experts, MMLU | Unverified | 0 |
| Mixture of Experts Meets Decoupled Message Passing: Towards General and Adaptive Node Classification | Dec 11, 2024 | Computational Efficiency | Code Available | 0 |
| Adaptive Prompting for Continual Relation Extraction: A Within-Task Variance Perspective | Dec 11, 2024 | Continual Relation Extraction, Mixture-of-Experts | Unverified | 0 |
| MoE-CAP: Benchmarking Cost, Accuracy and Performance of Sparse Mixture-of-Experts Systems | Dec 10, 2024 | Benchmarking, Mixture-of-Experts | Unverified | 0 |
| UniPaint: Unified Space-time Video Inpainting via Mixture-of-Experts | Dec 9, 2024 | Mixture-of-Experts, Video Inpainting | Unverified | 0 |
| An Entailment Tree Generation Approach for Multimodal Multi-Hop Question Answering with Mixture-of-Experts and Iterative Feedback Mechanism | Dec 8, 2024 | Mixture-of-Experts, Multi-hop Question Answering | Unverified | 0 |
| Towards 3D Acceleration for Low-Power Mixture-of-Experts and Multi-Head Attention Spiking Transformers | Dec 7, 2024 | Mixture-of-Experts | Unverified | 0 |