
KD^2M: An unifying framework for feature knowledge distillation

2025-04-02

Eduardo Fernandes Montesuma


Abstract

Knowledge Distillation (KD) seeks to transfer the knowledge of a teacher network to a student neural network. This process is often done by matching the networks' predictions (i.e., their outputs), but recently several works have proposed matching the distributions of the networks' activations (i.e., their features), a process known as distribution matching. In this paper, we propose a unifying framework, Knowledge Distillation through Distribution Matching (KD^2M), which formalizes this strategy. Our contributions are threefold: we i) provide an overview of distribution metrics used in distribution matching, ii) benchmark them on computer vision datasets, and iii) derive new theoretical results for KD.
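The abstract does not fix a particular distribution metric, so the sketch below is illustrative only: it pairs a standard cross-entropy task loss with an RBF-kernel MMD penalty between batches of teacher and student features, one metric commonly used in distribution matching. The names (`mmd2_rbf`, `distillation_loss`) and the weighting parameter `lam` are assumptions for this sketch, not notation from the paper.

```python
# Minimal sketch of feature knowledge distillation via distribution
# matching. Assumption: the matching term is a biased RBF-kernel MMD^2
# estimate; the paper surveys several metrics that could fill this slot.
import torch
import torch.nn.functional as F


def mmd2_rbf(x: torch.Tensor, y: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
    """Biased estimate of squared MMD between two batches of feature vectors."""
    def kernel(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # RBF kernel k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2))
        d2 = torch.cdist(a, b, p=2.0) ** 2
        return torch.exp(-d2 / (2.0 * bandwidth ** 2))

    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()


def distillation_loss(student_logits: torch.Tensor,
                      student_feats: torch.Tensor,
                      teacher_feats: torch.Tensor,
                      labels: torch.Tensor,
                      lam: float = 1.0) -> torch.Tensor:
    """Task loss plus a distribution-matching penalty on the features."""
    task = F.cross_entropy(student_logits, labels)
    # Detach teacher features: gradients should flow only into the student.
    match = mmd2_rbf(student_feats, teacher_feats.detach())
    return task + lam * match


if __name__ == "__main__":
    # Toy usage with random tensors standing in for network activations.
    student_feats = torch.randn(32, 128, requires_grad=True)
    teacher_feats = torch.randn(32, 128)
    student_logits = torch.randn(32, 10, requires_grad=True)
    labels = torch.randint(0, 10, (32,))
    loss = distillation_loss(student_logits, student_feats, teacher_feats, labels)
    loss.backward()
    print(loss.item())
```

Swapping `mmd2_rbf` for another distribution metric (e.g., a Wasserstein distance) changes only the matching term; the rest of the objective is unchanged, which is the sense in which distribution-matching KD methods share a common structure.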
