SOTAVerified

Semi-Supervised Speech Recognition via Local Prior Matching

2020-02-24 · Code Available

Wei-Ning Hsu, Ann Lee, Gabriel Synnaeve, Awni Hannun


Abstract

For sequence transduction tasks like speech recognition, a strong structured prior model encodes rich information about the target space, implicitly ruling out invalid sequences by assigning them low probability. In this work, we propose local prior matching (LPM), a semi-supervised objective that distills knowledge from a strong prior (e.g., a language model) to provide a learning signal to a discriminative model trained on unlabeled speech. We demonstrate that LPM is theoretically well-motivated, simple to implement, and superior to existing knowledge distillation techniques under comparable settings. Starting from a baseline trained on 100 hours of labeled speech, with an additional 360 hours of unlabeled data, LPM recovers 54% and 73% of the word error rate improvement on the clean and noisy test sets relative to a fully supervised model trained on the same data.
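As a rough illustration of the idea in the abstract, the loss below is a minimal numpy sketch of a prior-matching-style objective: for an unlabeled utterance, a handful of beam hypotheses are rescored by the language-model prior, the prior scores are normalized over that local beam into target weights, and the model is penalized for assigning low probability to hypotheses the prior favors. The function name, signature, and exact weighting scheme are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def lpm_loss(model_logprobs, lm_logprobs):
    """Hypothetical sketch of a local-prior-matching style loss.

    model_logprobs: log p_theta(y_k | x) for K beam hypotheses (from the
                    acoustic model being trained on unlabeled speech)
    lm_logprobs:    log p_LM(y_k) for the same hypotheses (from the prior)

    Returns a scalar loss that is small when the model concentrates
    probability on the hypotheses the language-model prior prefers.
    """
    model_logprobs = np.asarray(model_logprobs, dtype=float)
    lm_logprobs = np.asarray(lm_logprobs, dtype=float)
    # Normalize the prior scores over the local beam to get target weights
    # (a softmax over the K hypotheses, computed stably in log space).
    weights = np.exp(lm_logprobs - np.logaddexp.reduce(lm_logprobs))
    # Prior-weighted negative log-likelihood under the model.
    return float(-(weights * model_logprobs).sum())
```

With two hypotheses where the prior strongly prefers the first, a model that agrees with the prior incurs a much smaller loss than one that disagrees, which is the gradient signal the unlabeled data contributes.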

Tasks

Benchmark Results

Dataset                | Model                                         | Metric                | Claimed | Verified | Status
LibriSpeech test-clean | Local Prior Matching (Large Model)            | Word Error Rate (WER) | 7.19    | —        | Unverified
LibriSpeech test-other | Local Prior Matching (Large Model, ConvLM LM) | Word Error Rate (WER) | 15.28   | —        | Unverified
LibriSpeech test-other | Local Prior Matching (Large Model)            | Word Error Rate (WER) | 20.84   | —        | Unverified

Reproductions