Pushing the Limits of Semi-Supervised Learning for Automatic Speech Recognition
2020-10-20
Yu Zhang, James Qin, Daniel S. Park, Wei Han, Chung-Cheng Chiu, Ruoming Pang, Quoc V. Le, Yonghui Wu
- github.com/tuanio/noisy-student-training-asr (PyTorch, ★ 99)
Abstract
We employ a combination of recent developments in semi-supervised learning for automatic speech recognition to obtain state-of-the-art results on LibriSpeech, utilizing the unlabeled audio of the Libri-Light dataset. More precisely, we carry out noisy student training with SpecAugment, using giant Conformer models pre-trained with wav2vec 2.0. In doing so, we achieve word error rates (WERs) of 1.4%/2.6% on the LibriSpeech test/test-other sets, against the previous state-of-the-art WERs of 1.7%/3.3%.
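The "noise" injected during noisy student training comes from SpecAugment, which masks random frequency bands and time spans of the input spectrogram. The sketch below is an illustrative NumPy implementation of that masking step, not the paper's exact augmentation policy; the function name and all parameter values (mask counts and widths) are assumptions chosen for demonstration.

```python
import numpy as np

def spec_augment(spec, num_freq_masks=2, freq_mask_width=27,
                 num_time_masks=2, time_mask_ratio=0.05, rng=None):
    """SpecAugment-style masking on a (time, freq) log-mel spectrogram.

    Illustrative sketch: mask sizes and counts are example values,
    not the policy used in the paper. Each mask zeroes out a random
    contiguous band of frequency channels or time steps.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    out = spec.copy()
    T, F = out.shape

    # Frequency masking: zero a random band of at most freq_mask_width channels.
    for _ in range(num_freq_masks):
        w = int(rng.integers(1, freq_mask_width + 1))
        f0 = int(rng.integers(0, F - w + 1))
        out[:, f0:f0 + w] = 0.0

    # Time masking: mask width is capped at a fraction of the utterance length.
    max_t = max(1, int(time_mask_ratio * T))
    for _ in range(num_time_masks):
        w = int(rng.integers(1, max_t + 1))
        t0 = int(rng.integers(0, T - w + 1))
        out[t0:t0 + w, :] = 0.0

    return out
```

In the noisy student loop, the teacher labels unlabeled Libri-Light audio on clean features, while the student is trained on these masked versions of the same inputs, forcing it to be robust to the corruption.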
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| LibriSpeech test-clean | Conformer + Wav2vec 2.0 + SpecAugment-based Noisy Student Training with Libri-Light | Word Error Rate (WER) | 1.4 | — | Unverified |
| LibriSpeech test-other | Conformer + Wav2vec 2.0 + SpecAugment-based Noisy Student Training with Libri-Light | Word Error Rate (WER) | 2.6 | — | Unverified |