| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| LoRA-IR: Taming Low-Rank Experts for Efficient All-in-One Image Restoration | Oct 20, 2024 | Computational Efficiency | Code Available |
| Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts | Oct 14, 2024 | Mixture-of-Experts | Code Available |
| Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free | Oct 14, 2024 | Mixture-of-Experts | Code Available |
| Flex-MoE: Modeling Arbitrary Modality Combination via the Flexible Mixture-of-Experts | Oct 10, 2024 | Mixture-of-Experts | Code Available |
| MC-MoE: Mixture Compressor for Mixture-of-Experts LLMs Gains More | Oct 8, 2024 | Mixture-of-Experts, Quantization | Code Available |
| Open-RAG: Enhanced Retrieval-Augmented Reasoning with Open-Source Large Language Models | Oct 2, 2024 | Mixture-of-Experts | Code Available |
| CLIP-MoE: Towards Building Mixture of Experts for CLIP with Diversified Multiplet Upcycling | Sep 28, 2024 | Image Classification | Code Available |
| MiniDrive: More Efficient Vision-Language Models with Multi-Level 2D Features as Text Tokens for Autonomous Driving | Sep 11, 2024 | Autonomous Driving, Feature Engineering | Code Available |
| KAN4TSF: Are KAN and KAN-based models Effective for Time Series Forecasting? | Aug 21, 2024 | Mixture-of-Experts, Time Series | Code Available |
| Mixture of A Million Experts | Jul 4, 2024 | Computational Efficiency, Language Modeling | Code Available |