
Pushing the Limits of Semi-Supervised Learning for Automatic Speech Recognition

2020-10-20 · Code Available

Yu Zhang, James Qin, Daniel S. Park, Wei Han, Chung-Cheng Chiu, Ruoming Pang, Quoc V. Le, Yonghui Wu


Abstract

We employ a combination of recent developments in semi-supervised learning for automatic speech recognition to obtain state-of-the-art results on LibriSpeech, utilizing the unlabeled audio of the Libri-Light dataset. More precisely, we carry out noisy student training with SpecAugment using giant Conformer models pre-trained with wav2vec 2.0. By doing so, we achieve word-error-rates (WERs) of 1.4%/2.6% on the LibriSpeech test/test-other sets, against the current state-of-the-art WERs of 1.7%/3.3%.
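The noisy student training loop described in the abstract can be sketched in miniature. This is a toy illustration of the iterative teacher–student structure only: the helper names (`train`, `pseudo_label`) are hypothetical, and the toy "model" is a lookup table standing in for a Conformer ASR model; the paper's actual pipeline additionally uses wav2vec 2.0 pre-trained checkpoints and SpecAugment noise on the student's inputs.

```python
def train(labeled):
    """Toy 'training': memorize (features, label) pairs.
    Stand-in for fitting an ASR model on (augmented) audio."""
    model = {}
    for features, label in labeled:
        model[features] = label
    return model

def pseudo_label(model, unlabeled, default="<unk>"):
    """Teacher inference on the unlabeled set (no noise at this step)."""
    return [(x, model.get(x, default)) for x in unlabeled]

def noisy_student(labeled, unlabeled, generations=3):
    """Iterate: train a teacher, pseudo-label the unlabeled data,
    retrain a student on the combined set, then promote the student
    to teacher for the next generation."""
    model = train(labeled)
    for _ in range(generations):
        combined = labeled + pseudo_label(model, unlabeled)
        model = train(combined)
    return model
```

In the paper, each generation's student is also re-initialized from a wav2vec 2.0 pre-trained encoder and trained with SpecAugment applied to its inputs, which is what makes the student "noisy" relative to the teacher.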

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| LibriSpeech test-clean | Conformer + Wav2vec 2.0 + SpecAugment-based Noisy Student Training with Libri-Light | Word Error Rate (WER) | 1.4 | — | Unverified |
| LibriSpeech test-other | Conformer + Wav2vec 2.0 + SpecAugment-based Noisy Student Training with Libri-Light | Word Error Rate (WER) | 2.6 | — | Unverified |
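The WER figures above are word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words. A minimal sketch of the standard computation (not the scoring tool used for these benchmarks):

```python
def wer(reference, hypothesis):
    """Word Error Rate via word-level Levenshtein distance,
    normalized by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between the first i reference words
    # and the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitution,
                           dp[i - 1][j] + 1,   # deletion
                           dp[i][j - 1] + 1)   # insertion
    return dp[-1][-1] / len(ref)
```

For example, one substituted word in a six-word reference yields a WER of 1/6 ≈ 16.7%; the table's 1.4 and 2.6 are these ratios expressed as percentages over the whole test set.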
