LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models
Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, Yongqiang Ma
Abstract
Efficient fine-tuning is vital for adapting large language models (LLMs) to downstream tasks. However, implementing these methods across different models requires non-trivial effort. We present LlamaFactory, a unified framework that integrates a suite of cutting-edge efficient training methods. It allows users to flexibly customize the fine-tuning of 100+ LLMs without writing any code, through the built-in web UI, LlamaBoard. We empirically validate the efficiency and effectiveness of our framework on language modeling and text generation tasks. The framework has been released at https://github.com/hiyouga/LLaMA-Factory and has received over 25,000 stars and 3,000 forks.
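To make the "no coding required" claim concrete, below is a minimal sketch of how a fine-tuning run is typically specified in this framework: a single YAML file passed to a command-line entry point. The key names (e.g. `model_name_or_path`, `finetuning_type`) and the `llamafactory-cli` command follow the example configs shipped in the repository at the time of writing; they are assumptions drawn from the codebase rather than anything stated in the abstract, and may differ across versions.

```yaml
# Illustrative LoRA SFT config, in the style of the example YAML files
# under the repository's examples/ directory (key names assumed from
# the current codebase; verify against your installed version).
# Launch with:  llamafactory-cli train llama3_lora_sft.yaml
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

stage: sft                  # supervised fine-tuning
do_train: true
finetuning_type: lora       # parameter-efficient adaptation
lora_target: all            # attach LoRA adapters to all linear layers

dataset: alpaca_en_demo     # demo dataset bundled with the repo
template: llama3            # chat template matching the base model
cutoff_len: 2048            # maximum sequence length

output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
```

In current releases, the same run can instead be configured interactively through LlamaBoard (started with `llamafactory-cli webui`), which exposes these options as form fields.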