Baby Llama: knowledge distillation from an ensemble of teachers trained on a small dataset with no performance penalty
2023-08-03 · Code Available
Inar Timiryasov, Jean-Loup Tastet
- Code (official, PyTorch): github.com/timinar/babyllama
Abstract
We present our submission to the BabyLM challenge, whose goal is to improve the sample efficiency of language models. We trained an ensemble consisting of a GPT-2 model and small LLaMA models on the developmentally plausible, 10M-word BabyLM dataset, then distilled it into a small, 58M-parameter LLaMA model. The distilled model outperforms both of its teachers, as well as a similar model trained without distillation. This suggests that when the teachers are trained on a sufficiently small dataset, distillation can not only retain their full performance but exceed it, yielding significantly better results than direct training.
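The core idea, distilling an ensemble of teachers into a single small student, can be sketched as a training loss that combines a soft term against the averaged teacher distribution with a hard cross-entropy term on the ground-truth tokens. The following is a minimal illustration of that pattern, not the paper's exact implementation; the function name, temperature, and mixing weight `alpha` are assumptions for the sketch.

```python
import torch
import torch.nn.functional as F

def ensemble_distillation_loss(student_logits, teacher_logits_list, labels,
                               temperature=2.0, alpha=0.5):
    """Hypothetical ensemble-distillation loss (illustrative sketch).

    student_logits: (batch, vocab) logits from the small student model.
    teacher_logits_list: list of (batch, vocab) logits, one per teacher
        (e.g. the GPT-2 and LLaMA teachers in the ensemble).
    labels: (batch,) ground-truth next-token indices.
    """
    # Average the teachers' softened probability distributions.
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)

    # Soft term: KL divergence from the averaged-teacher distribution
    # to the student's softened distribution (scaled by T^2, as is
    # conventional, to keep gradient magnitudes comparable).
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    soft_loss = F.kl_div(student_log_probs, teacher_probs,
                         reduction="batchmean") * temperature ** 2

    # Hard term: standard cross-entropy on the true next tokens.
    hard_loss = F.cross_entropy(student_logits, labels)

    return alpha * soft_loss + (1 - alpha) * hard_loss
```

In practice, training the student on this combined objective over the same 10M-word dataset is what allows it to surpass each individual teacher.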