| Title | Date | Tags |
| --- | --- | --- |
| Leveraging MoE-based Large Language Model for Zero-Shot Multi-Task Semantic Communication | Mar 19, 2025 | Language Modeling |
| Leveraging Pre-Trained Models for Multimodal Class-Incremental Learning under Adaptive Fusion | Feb 7, 2025 | Class-Incremental Learning |
| Lifelong Evolution: Collaborative Learning between Large and Small Language Models for Continuous Emergent Fake News Detection | Jun 5, 2025 | Fake News Detection, Knowledge Editing |
| Lifelong Knowledge Editing for Vision Language Models with Low-Rank Mixture-of-Experts | Nov 23, 2024 | Knowledge Editing, Mixture-of-Experts |
| Lifelong Language Pretraining with Distribution-Specialized Experts | May 20, 2023 | Lifelong Learning, Mixture-of-Experts |
| Little By Little: Continual Learning via Self-Activated Sparse Mixture-of-Rank Adaptive Learning | Jun 26, 2025 | Continual Learning, Mixture-of-Experts |
| Llama 3 Meets MoE: Efficient Upcycling | Dec 13, 2024 | Mixture-of-Experts, MMLU |
| LLaVA-CMoE: Towards Continual Mixture of Experts for Large Vision-Language Models | Mar 27, 2025 | Mixture-of-Experts |
| LLaVA-MoLE: Sparse Mixture of LoRA Experts for Mitigating Data Conflicts in Instruction Finetuning MLLMs | Jan 29, 2024 | Language Modelling, Large Language Model |
| LLM4WM: Adapting LLM for Wireless Multi-Tasking | Jan 22, 2025 | General Knowledge, Language Modeling |