MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts

2024-04-22 · Code Available

Dengchun Li, Yingzi Ma, Naizheng Wang, Zhengmao Ye, Zhiyuan Cheng, Yinghao Tang, Yan Zhang, Lei Duan, Jie Zuo, Cai Yang, Mingjie Tang

Abstract

Fine-tuning Large Language Models (LLMs) is a common practice to adapt pre-trained models for specific applications. While methods like LoRA effectively address GPU memory constraints during fine-tuning, their performance often falls short, especially in multi-task scenarios. In contrast, Mixture-of-Experts (MoE) models, such as Mixtral 8x7B, demonstrate remarkable performance in multi-task learning while maintaining a reduced parameter count. However, the resource requirements of these MoEs remain challenging, particularly for consumer-grade GPUs with less than 24 GB of memory. To tackle these challenges, we propose MixLoRA, an approach for constructing a resource-efficient sparse MoE model based on LoRA. MixLoRA inserts multiple LoRA-based experts within the feed-forward network block of a frozen pre-trained dense model and employs a commonly used top-k router. Unlike other LoRA-based MoE methods, MixLoRA further improves model performance with independent LoRA adapters in the attention layers. Additionally, an auxiliary load-balance loss is employed to address the imbalance problem of the router. Our evaluations show that MixLoRA improves accuracy by about 9% over state-of-the-art PEFT methods in multi-task learning scenarios. We also propose a new high-throughput framework to alleviate the computation and memory bottlenecks during training and inference of MoE models. This framework reduces GPU memory consumption by 40% and token computation latency by 30% during both training and inference.
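The architecture described in the abstract is concrete enough to sketch. Below is a minimal, hypothetical PyTorch rendering of a MixLoRA-style feed-forward block: the pre-trained dense FFN weights stay frozen, each expert is a pair of LoRA deltas applied on top of them, a top-k router assigns tokens to experts, and a Switch-Transformer-style auxiliary loss penalizes router imbalance. All class and parameter names here are illustrative, not the authors' implementation, and LLaMA's gated SwiGLU FFN is simplified to a plain up/down projection.

```python
# Hypothetical sketch of a MixLoRA-style FFN block; names and hyperparameters
# are illustrative, not taken from the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRADelta(nn.Module):
    """Trainable low-rank update: x -> (x A^T) B^T * (alpha / r)."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))  # zero init: delta starts at 0
        self.scale = alpha / r

    def forward(self, x):
        return (x @ self.A.T) @ self.B.T * self.scale

class MixLoRAFFN(nn.Module):
    """Frozen dense FFN shared by all experts; each expert adds its own LoRA deltas."""
    def __init__(self, hidden, intermediate, num_experts=8, top_k=2, r=8):
        super().__init__()
        self.up = nn.Linear(hidden, intermediate, bias=False)
        self.down = nn.Linear(intermediate, hidden, bias=False)
        for p in list(self.up.parameters()) + list(self.down.parameters()):
            p.requires_grad = False  # pre-trained dense weights stay frozen
        self.router = nn.Linear(hidden, num_experts, bias=False)  # top-k gate
        self.up_lora = nn.ModuleList(LoRADelta(hidden, intermediate, r) for _ in range(num_experts))
        self.down_lora = nn.ModuleList(LoRADelta(intermediate, hidden, r) for _ in range(num_experts))
        self.num_experts, self.top_k = num_experts, top_k

    def forward(self, x):                                    # x: (batch, seq, hidden)
        tokens = x.reshape(-1, x.size(-1))                   # (T, hidden)
        probs = F.softmax(self.router(tokens), dim=-1)       # (T, E)
        weights, idx = probs.topk(self.top_k, dim=-1)        # route each token to top-k experts
        weights = weights / weights.sum(-1, keepdim=True)    # renormalize over chosen experts
        out = torch.zeros_like(tokens)
        for e in range(self.num_experts):
            hit = (idx == e)                                 # (T, k) bool
            sel = hit.any(-1)                                # tokens routed to expert e
            if not sel.any():
                continue
            t = tokens[sel]
            h = F.silu(self.up(t) + self.up_lora[e](t))      # frozen weight + expert's LoRA delta
            y = self.down(h) + self.down_lora[e](h)
            w = (weights * hit)[sel].sum(-1, keepdim=True)   # gate weight for this expert
            out[sel] = out[sel] + w * y
        # Auxiliary load-balance loss (Switch-style): E * sum_e f_e * P_e, where
        # f_e = fraction of routing slots sent to expert e, P_e = mean router prob.
        f = F.one_hot(idx, self.num_experts).float().mean(dim=(0, 1))
        self.aux_loss = self.num_experts * (f * probs.mean(0)).sum()
        return out.view_as(x)
```

Only the router and the LoRA A/B matrices are trainable, which is what keeps the footprint within consumer-GPU budgets; the independent attention-layer LoRA adapters the abstract mentions would be added analogously on top of the frozen q/k/v/o projections.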

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| arc_challenge | LLaMA-2 7B + MixLoRA | Accuracy | 58.1 | | Unverified |
| arc_challenge | LLaMA-2 13B + MixLoRA | Accuracy | 69.9 | | Unverified |
| arc_challenge | LLaMA-3 8B + MixLoRA | Accuracy | 79.9 | | Unverified |
| arc_easy | LLaMA-3 8B + MixLoRA | Accuracy | 86.5 | | Unverified |
| arc_easy | LLaMA-2 13B + MixLoRA | Accuracy | 83.5 | | Unverified |
| arc_easy | LLaMA-2 7B + MixLoRA | Accuracy | 77.7 | | Unverified |
| WinoGrande | LLaMA-2 7B + MixLoRA | Accuracy | 76.8 | | Unverified |
| WinoGrande | LLaMA-3 8B + MixLoRA | Accuracy | 82.1 | | Unverified |
| WinoGrande | LLaMA-2 13B + MixLoRA | Accuracy | 86.3 | | Unverified |
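The task names above match the standard task identifiers in EleutherAI's lm-evaluation-harness, so a verification attempt might start from its Python API, as in the sketch below. The checkpoint path is a hypothetical placeholder: MixLoRA adapters are trained and loaded through the authors' own framework, so this assumes a model already merged or exported to a Hugging Face-compatible checkpoint.

```python
# Hedged sketch: scoring the claimed tasks with lm-evaluation-harness
# (pip install lm-eval). PATH_TO_EXPORTED_MIXLORA_MODEL is a placeholder.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=PATH_TO_EXPORTED_MIXLORA_MODEL",
    tasks=["arc_challenge", "arc_easy", "winogrande"],
)
for task, metrics in results["results"].items():
    print(task, metrics.get("acc,none"))  # accuracy as reported by the harness
```

Before marking a row verified, check which accuracy variant the paper reports (the harness emits both plain and length-normalized accuracy for ARC tasks) so that claimed and measured numbers are actually comparable.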

Reproductions

No reproductions yet. Be the first to reproduce this paper.