
Self-training and Pre-training are Complementary for Speech Recognition

2020-10-22 · Code Available

Qiantong Xu, Alexei Baevski, Tatiana Likhomanenko, Paden Tomasello, Alexis Conneau, Ronan Collobert, Gabriel Synnaeve, Michael Auli


Abstract

Self-training and unsupervised pre-training have emerged as effective approaches to improve speech recognition systems using unlabeled data. However, it is not clear whether they learn similar patterns or if they can be effectively combined. In this paper, we show that pseudo-labeling and pre-training with wav2vec 2.0 are complementary in a variety of labeled data setups. Using just 10 minutes of labeled data from Libri-light as well as 53k hours of unlabeled data from LibriVox achieves WERs of 3.0%/5.2% on the clean and other test sets of Librispeech - rivaling the best published systems trained on 960 hours of labeled data only a year ago. Training on all labeled data of Librispeech achieves WERs of 1.5%/3.1%.
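As a rough illustration of the pipeline the abstract describes, the sketch below generates pseudo-labels for unlabeled audio with a pre-trained and fine-tuned wav2vec 2.0 acoustic model. It is a minimal sketch assuming the Hugging Face transformers API and greedy CTC decoding; the checkpoint name and helper function are placeholders, not the paper's release. The paper's actual setup builds on fairseq/wav2letter, decodes pseudo-labels with a beam-search decoder plus a language model, and then retrains on the combined labeled and pseudo-labeled data.

```python
# Minimal pseudo-labeling sketch with a wav2vec 2.0 model.
# Assumption: Hugging Face `transformers` checkpoint and greedy CTC decoding;
# the paper itself uses fairseq/wav2letter with LM beam-search decoding.
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# A wav2vec 2.0 model already fine-tuned on the small labeled set
# (checkpoint name is a placeholder standing in for such a model).
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
model.eval()


def pseudo_label(waveform, sampling_rate=16_000):
    """Transcribe one unlabeled utterance; the resulting (audio, text) pair
    is added to the training set for the next round of supervised training."""
    inputs = processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(pred_ids)[0]
```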

Tasks

Speech Recognition

Benchmark Results

Dataset | Model | Metric | Claimed | Verified | Status
LibriSpeech test-clean | Conv + Transformer + wav2vec2.0 + pseudo labeling | Word Error Rate (WER) | 1.5 | | Unverified
LibriSpeech test-clean | wav2vec_wav2letter | Word Error Rate (WER) | 2.7 | | Unverified
LibriSpeech test-other | Conv + Transformer + wav2vec2.0 + pseudo labeling | Word Error Rate (WER) | 3.1 | | Unverified
LibriSpeech train-clean-100 test-clean | wav2vec_wav2letter | Word Error Rate (WER) | 2.8 | | Unverified
LibriSpeech train-clean-100 test-other | wav2vec_wav2letter | Word Error Rate (WER) | 3.6 | | Unverified
