A Functional Perspective on Knowledge Distillation in Neural Networks
Israel Mason-Williams, Gabryel Mason-Williams, Helen Yannakoudakis
Abstract
Knowledge distillation is considered a compression mechanism when judged on the resulting student's accuracy and loss, yet its functional impact is poorly understood. We quantify the compression capacity of knowledge distillation and the resulting knowledge transfer from a functional perspective, decoupling compression from architectural reduction to provide an improved understanding of knowledge distillation. We employ a control-driven experimental protocol with hypothesis testing and random control distillation to isolate and understand knowledge transfer mechanisms across data modalities. To test the breadth and limits of our analyses, we study self-distillation, standard distillation, feature-map matching variants, distillation scaling laws across model sizes, and the impact of temperature on knowledge transfer. We find statistically supported knowledge transfer in some modalities and architectures; however, the extent of this transfer is less pronounced than anticipated, even under conditions that maximise knowledge sharing. Notably, in cases of significant functional transfer, we identify a consistent and severe asymmetric transfer of negative knowledge to the student, raising safety concerns for knowledge distillation. Across 22 experimental setups, 9 architectures, and 7 datasets, our results suggest that knowledge distillation functions less as a robust compression-by-transfer mechanism and more as a data-dependent regulariser whose transfer component is biased towards negative asymmetric transfer.
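For context, the standard distillation objective referenced in the abstract softens teacher and student logits with a temperature T and penalises the divergence between the resulting distributions. A minimal NumPy sketch of this loss (an illustration of the general technique, not the authors' code; the T^2 scaling follows the common convention for matching gradient magnitudes) is:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T flattens the distribution,
    # exposing the teacher's relative confidences across wrong classes.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=4.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so its gradient magnitude is comparable to the
    # hard-label cross-entropy term it is usually combined with.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()
```

When the student's logits match the teacher's, the loss is zero; any mismatch in the softened distributions yields a positive penalty that grows with the divergence.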