Movement Pruning: Adaptive Sparsity by Fine-Tuning
Victor Sanh, Thomas Wolf, Alexander M. Rush
Code
- github.com/huggingface/transformers/tree/master/examples/movement-pruning (official, PyTorch, ★ 0)
- github.com/huggingface/nn_pruning (PyTorch, ★ 406)
- github.com/huggingface/block_movement_pruning (PyTorch, ★ 83)
- github.com/kainoj/pruning-bias (PyTorch, ★ 7)
Abstract
Magnitude pruning is a widely used strategy for reducing model size in pure supervised learning; however, it is less effective in the transfer learning regime that has become standard for state-of-the-art natural language processing applications. We propose the use of movement pruning, a simple, deterministic first-order weight pruning method that is more adaptive to pretrained model fine-tuning. We give mathematical foundations to the method and compare it to existing zeroth- and first-order pruning methods. Experiments show that when pruning large pretrained language models, movement pruning shows significant improvements in high-sparsity regimes. When combined with distillation, the approach achieves minimal accuracy loss with down to only 3% of the model parameters.
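The sketch below illustrates the first-order idea behind movement pruning: instead of keeping the largest weights (magnitude pruning), accumulate a score S_i = -Σ_t (∂L/∂W_i) · W_i during fine-tuning and keep the weights that are moving away from zero. This is a minimal PyTorch illustration on a toy layer; the model, data, and variable names (e.g., `movement_scores`) are hypothetical, and in the paper the scores are learned jointly with the weights via a straight-through estimator rather than computed post hoc as done here.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for one layer of a pretrained model being fine-tuned.
model = nn.Linear(32, 2)
data = torch.randn(64, 32)
labels = torch.randint(0, 2, (64,))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Movement importance: S_i = -sum_t (dL/dW_i) * W_i accumulated over
# fine-tuning steps. A large positive score means the weight is being
# pushed away from zero by the fine-tuning gradients.
movement_scores = torch.zeros_like(model.weight)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(data), labels)
    loss.backward()
    with torch.no_grad():
        movement_scores -= model.weight.grad * model.weight
    optimizer.step()

# Prune to 10% density: keep only the weights with the highest
# movement scores (contrast with magnitude pruning, which would
# keep the weights with the largest |W_i|).
k = int(0.10 * movement_scores.numel())
threshold = movement_scores.flatten().kthvalue(
    movement_scores.numel() - k
).values
mask = (movement_scores > threshold).float()
with torch.no_grad():
    model.weight.mul_(mask)

print(f"kept {int(mask.sum())} of {mask.numel()} weights")
```

Because the score depends on the gradients seen during fine-tuning rather than on pretrained magnitudes, the retained weights adapt to the downstream task, which is what makes the method suited to the transfer-learning regime described in the abstract.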