
Sequence-Level Knowledge Distillation for Model Compression of Attention-based Sequence-to-Sequence Speech Recognition

2018-11-12

Raden Mu'az Mun'im, Nakamasa Inoue, Koichi Shinoda


Abstract

We investigate the feasibility of sequence-level knowledge distillation of Sequence-to-Sequence (Seq2Seq) models for Large Vocabulary Continuous Speech Recognition (LVCSR). We first use a pre-trained larger teacher model to generate multiple hypotheses per utterance with beam search. With the same input, we then train the student model using these hypotheses generated by the teacher as pseudo labels in place of the original ground-truth labels. We evaluate our proposed method on the Wall Street Journal (WSJ) corpus. It achieves up to 9.8× parameter reduction with an accuracy loss of up to 7.0% word-error-rate (WER) increase.
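The training loop described in the abstract can be summarized in a few lines: the teacher's beam-search hypotheses replace the reference transcripts when computing the student's loss. The sketch below is a minimal illustration under assumed interfaces; the callables `teacher_beam_search` and `train_student_on` are hypothetical stand-ins, not functions from the authors' code.

```python
from typing import Callable, List, Sequence, Tuple

# Hypothetical interfaces (illustrative only, not from the paper's implementation):
#   teacher_beam_search(features, beam_size) -> list of K hypothesis token sequences
#   train_student_on(features, pseudo_label) -> loss value for one (input, label) pair
def sequence_level_kd_epoch(
    dataset: Sequence[Tuple[object, object]],        # (features, ground_truth) pairs
    teacher_beam_search: Callable[[object, int], List[object]],
    train_student_on: Callable[[object, object], float],
    beam_size: int = 4,
) -> float:
    """One epoch of sequence-level knowledge distillation (sketch)."""
    total_loss, count = 0.0, 0
    for features, _ground_truth in dataset:          # ground-truth labels are not used
        # Teacher generates multiple hypotheses per utterance via beam search.
        pseudo_labels = teacher_beam_search(features, beam_size)
        # Student is trained on each teacher hypothesis as if it were the reference.
        for hypothesis in pseudo_labels:
            total_loss += train_student_on(features, hypothesis)
            count += 1
    return total_loss / max(count, 1)
```

In this formulation the only change from standard supervised training is the source of the labels, which is what allows a much smaller student to approximate the larger teacher.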
