
Preparing Lessons: Improve Knowledge Distillation with Better Supervision

2019-11-18

Tiancheng Wen, Shenqi Lai, Xueming Qian


Abstract

Knowledge distillation (KD) is widely used to train a compact model under the supervision of another, larger model, which can effectively improve performance. Previous methods mainly focus on two aspects: 1) training the student to mimic the representation space of the teacher; 2) training the model progressively or adding an extra module such as a discriminator. Knowledge from the teacher is useful, but it is still not exactly correct compared with the ground truth. Moreover, overly uncertain supervision also degrades the result. We introduce two novel approaches, Knowledge Adjustment (KA) and Dynamic Temperature Distillation (DTD), to penalize bad supervision and improve the student model. Experiments on CIFAR-100, CINIC-10 and Tiny ImageNet show that our methods achieve encouraging performance compared with state-of-the-art methods. When combined with other KD-based methods, performance improves further.
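The abstract does not spell out KA or DTD, but both build on the standard soft-target distillation objective (a weighted sum of hard-label cross-entropy and a temperature-softened teacher/student KL term). The following NumPy sketch of that baseline loss is an illustration under assumed defaults (`T=4.0`, `alpha=0.9` are common but arbitrary choices), not the paper's implementation; DTD would replace the fixed `T` with a per-sample value.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T yields a softer distribution.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Vanilla KD loss (Hinton-style): alpha-weighted KL between the
    temperature-softened teacher and student distributions, plus
    (1 - alpha)-weighted cross-entropy with the hard labels.
    DTD (per the abstract) would make T sample-dependent; this sketch keeps it fixed."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # KL(teacher || student); the T^2 factor keeps gradient magnitudes
    # comparable to the hard-label term as T grows.
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    # Ordinary cross-entropy against the ground-truth labels (T = 1).
    p_hard = softmax(student_logits, 1.0)
    ce = -np.log(p_hard[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * (T ** 2) * kl + (1 - alpha) * ce))
```

When the student matches the teacher exactly, the KL term vanishes and only the hard-label cross-entropy remains, which is why a student that diverges from the teacher incurs a strictly larger loss.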
