UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data
Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang
Code
- github.com/cywang97/unispeech (official, referenced in paper; PyTorch, ★ 9)
- github.com/microsoft/unispeech (PyTorch, ★ 479)
- github.com/facebookresearch/data2vec_vision (PyTorch, ★ 80)
- github.com/MS-P3/code7/tree/main/unispeech_sat (MindSpore, ★ 0)
- github.com/MindCode-4/code-5/tree/main/unispeech (MindSpore, ★ 0)
Abstract
In this paper, we propose a unified pre-training approach called UniSpeech to learn speech representations with both unlabeled and labeled data, in which supervised phonetic CTC learning and phonetically-aware contrastive self-supervised learning are conducted in a multi-task learning manner. The resultant representations capture information more correlated with phonetic structures and improve generalization across languages and domains. We evaluate the effectiveness of UniSpeech for cross-lingual representation learning on the public CommonVoice corpus. The results show that UniSpeech outperforms self-supervised pretraining and supervised transfer learning for speech recognition by up to 13.4% and 17.8% relative phone error rate reductions, respectively (averaged over all testing languages). The transferability of UniSpeech is also demonstrated on a domain-shift speech recognition task, where it achieves a relative word error rate reduction of 6% against the previous approach.
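The sketch below is a minimal, illustrative PyTorch rendering of the multi-task objective described in the abstract, not the official UniSpeech implementation: a weighted sum of a supervised phonetic CTC loss and a wav2vec 2.0-style contrastive loss. The toy GRU encoder, the projection used as contrastive targets (UniSpeech uses quantized targets from a feature encoder), and the weighting factor `alpha` are all assumptions made for brevity.

```python
# Minimal sketch of a CTC + contrastive multi-task loss (assumed architecture, not the paper's).
import torch
import torch.nn as nn
import torch.nn.functional as F

class UniSpeechSketch(nn.Module):
    def __init__(self, feat_dim=80, hidden=256, vocab_size=50, alpha=0.5):
        super().__init__()
        self.alpha = alpha                                   # trade-off between the two losses
        self.encoder = nn.GRU(feat_dim, hidden, num_layers=2, batch_first=True)
        self.ctc_head = nn.Linear(hidden, vocab_size)        # phonetic CTC projection
        self.proj = nn.Linear(hidden, hidden)                # stand-in for quantized targets
        self.ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)

    def contrastive_loss(self, context, targets, num_negatives=10, temperature=0.1):
        # InfoNCE over time steps: each context vector must identify its own target
        # frame against negatives drawn from other time steps of the same utterance.
        B, T, H = context.shape
        logits = []
        labels = torch.zeros(B * T, dtype=torch.long, device=context.device)
        for b in range(B):
            for t in range(T):
                neg_idx = torch.randint(0, T, (num_negatives,), device=context.device)
                cands = torch.cat([targets[b, t:t + 1], targets[b, neg_idx]], dim=0)
                sim = F.cosine_similarity(context[b, t:t + 1], cands) / temperature
                logits.append(sim)
        logits = torch.stack(logits)                          # (B*T, 1 + num_negatives), index 0 positive
        return F.cross_entropy(logits, labels)

    def forward(self, feats, feat_lens, labels, label_lens):
        context, _ = self.encoder(feats)                      # (B, T, H) contextual representations
        # Supervised branch: phonetic CTC on the labeled data.
        log_probs = F.log_softmax(self.ctc_head(context), dim=-1).transpose(0, 1)
        l_ctc = self.ctc_loss(log_probs, labels, feat_lens, label_lens)
        # Self-supervised branch: contrastive loss against (detached) projected frame targets.
        l_contrastive = self.contrastive_loss(context, self.proj(context).detach())
        return self.alpha * l_ctc + (1 - self.alpha) * l_contrastive

# Toy usage with random features and phone labels.
model = UniSpeechSketch()
feats = torch.randn(2, 40, 80)
loss = model(feats,
             feat_lens=torch.tensor([40, 40]),
             labels=torch.randint(1, 50, (2, 12)),
             label_lens=torch.tensor([12, 12]))
loss.backward()
```

In the paper, the two losses are applied in a multi-task fashion over labeled and unlabeled data; this sketch only illustrates how a weighted combination of the two terms can be computed in a single forward pass.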