
Knowledge Distillation for Sustainable Neural Machine Translation

2022-09-01 · AMTA 2022

Wandri Jooste, Andy Way, Rejwanul Haque, Riccardo Superbo


Abstract

Knowledge distillation (KD) can be used to reduce model size and training time without significant loss in performance. However, the process of distilling knowledge requires translating sizeable data sets, and the translation is usually performed by large, cumbersome models (teacher models). Producing such translations for KD is expensive in terms of both time and cost, which is a significant concern for translation service providers. Moreover, this process can result in a higher carbon footprint. In this work, we tested different variants of a teacher model for KD, tracked the power consumption of the GPUs used during translation, recorded overall translation time, estimated translation cost, and measured the accuracy of the student models. The findings of our investigation demonstrate to the translation industry a cost-effective, high-quality alternative to standard KD training methods.
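
The sketch below illustrates, under stated assumptions, the workflow the abstract describes: a teacher model translates a source corpus to produce synthetic targets for a student (sequence-level KD), while GPU power draw is sampled via NVML to estimate the energy cost of the translation step. The toolkit (Hugging Face `transformers` with a Marian checkpoint), the model name, the sampling interval, and the energy arithmetic are all illustrative assumptions, not the authors' setup; the paper does not specify its implementation here.

```python
"""Minimal sketch of sequence-level KD data generation with GPU power tracking.

Assumptions (not from the paper): the Helsinki-NLP/opus-mt-en-de checkpoint
stands in for a teacher model, and a 1-second NVML polling interval is used.
"""
import time
import threading

import pynvml
from transformers import MarianMTModel, MarianTokenizer  # assumed toolkit

power_samples_mw = []          # GPU power readings, in milliwatts
stop_sampling = threading.Event()

def sample_gpu_power(interval_s: float = 1.0, gpu_index: int = 0) -> None:
    """Poll NVML for instantaneous power draw until asked to stop."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_index)
    while not stop_sampling.is_set():
        power_samples_mw.append(pynvml.nvmlDeviceGetPowerUsage(handle))
        time.sleep(interval_s)
    pynvml.nvmlShutdown()

# Hypothetical teacher checkpoint; the paper evaluates its own teacher variants.
model_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
teacher = MarianMTModel.from_pretrained(model_name).to("cuda").eval()

sources = [
    "This is a sample sentence.",
    "Knowledge distillation reduces model size.",
]

sampler = threading.Thread(target=sample_gpu_power)
sampler.start()
start = time.time()

# The teacher translates the source corpus; its outputs become the
# student's training targets (sequence-level knowledge distillation).
batch = tokenizer(sources, return_tensors="pt", padding=True).to("cuda")
outputs = teacher.generate(**batch, num_beams=5)
distilled_targets = tokenizer.batch_decode(outputs, skip_special_tokens=True)

elapsed = time.time() - start
stop_sampling.set()
sampler.join()

# Average power (W) times translation time (h) gives a rough energy estimate.
avg_power_w = sum(power_samples_mw) / max(len(power_samples_mw), 1) / 1000.0
print(f"Translated {len(sources)} sentences in {elapsed:.1f}s, "
      f"~{avg_power_w * elapsed / 3600:.4f} Wh on GPU 0")
```

In practice the same power-sampling thread can wrap any teacher variant's decoding run, so the energy, time, and (given a price per GPU-hour) cost of each variant can be compared alongside the resulting student's accuracy.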
