DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf
Code
- github.com/huggingface/transformers (official, in paper; PyTorch; ★ 158,292)
- github.com/huggingface/swift-coreml-transformers (official, in paper; PyTorch; ★ 0)
- github.com/allenai/scifact (PyTorch; ★ 251)
- github.com/suinleelab/path_explain (TensorFlow; ★ 192)
- github.com/epfml/collaborative-attention (PyTorch; ★ 152)
- github.com/sdadas/polish-roberta (PyTorch; ★ 91)
- github.com/jaketae/pytorch-malware-detection (PyTorch; ★ 81)
- github.com/philschmid/knowledge-distillation-transformers-pytorch-sagemaker (PyTorch; ★ 48)
- github.com/stefan-it/europeana-bert (TensorFlow; ★ 39)
- github.com/facebookresearch/EgoTV (PyTorch; ★ 27)
Abstract
As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models on the edge and/or under constrained computational training or inference budgets remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can then be fine-tuned with good performance on a wide range of tasks like its larger counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage knowledge distillation during the pre-training phase and show that it is possible to reduce the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive biases learned by larger models during pre-training, we introduce a triple loss combining language modeling, distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train and we demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study.
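The triple loss mentioned in the abstract combines a distillation term (student matched against the teacher's temperature-softened output distribution), the standard masked-language-modeling cross-entropy, and a cosine term aligning student and teacher hidden states. A minimal, dependency-free sketch of how these three terms could be combined is below; the function name, the scalar toy inputs, and the loss weights are illustrative assumptions, not the paper's exact training code (the released implementation operates on batched tensors and uses KL divergence for the distillation term):

```python
import math

def softmax(logits, temperature=1.0):
    # Softmax with temperature; a higher temperature softens the distribution.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def triple_loss(student_logits, teacher_logits, target_index,
                student_hidden, teacher_hidden,
                temperature=2.0, w_ce=5.0, w_mlm=2.0, w_cos=1.0):
    # Weights and temperature are illustrative assumptions for this sketch.
    # 1) Distillation: cross-entropy of the student against the softened teacher.
    t_probs = softmax(teacher_logits, temperature)
    s_log_probs = [math.log(p) for p in softmax(student_logits, temperature)]
    ce = -sum(tp * slp for tp, slp in zip(t_probs, s_log_probs))
    # 2) Masked LM: standard cross-entropy against the true masked token.
    mlm = -math.log(softmax(student_logits)[target_index])
    # 3) Cosine distance between student and teacher hidden-state vectors.
    dot = sum(a * b for a, b in zip(student_hidden, teacher_hidden))
    norm = (math.sqrt(sum(a * a for a in student_hidden))
            * math.sqrt(sum(b * b for b in teacher_hidden)))
    cos = 1.0 - dot / norm
    return w_ce * ce + w_mlm * mlm + w_cos * cos

# A student that agrees with the teacher incurs a lower total loss
# than one that disagrees on both logits and hidden states.
agree = triple_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0], 0, [1.0, 0.0], [1.0, 0.0])
disagree = triple_loss([0.5, 2.0, -1.0], [2.0, 0.5, -1.0], 0, [0.0, 1.0], [1.0, 0.0])
```

In this toy setting, `disagree` exceeds `agree` because all three terms grow when the student drifts from the teacher and from the true label.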
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CoLA | DistilBERT 66M | Matthews corr. | 49.1 | — | Unverified |