Self-supervised Fine-tuning for Improved Content Representations by Speaker-invariant Clustering
2023-05-18
Heng-Jui Chang, Alexander H. Liu, James Glass
- Code (official, PyTorch): github.com/vectominist/spin
Abstract
Self-supervised speech representation models have succeeded in various tasks, but improving them for content-related problems using unlabeled data is challenging. We propose speaker-invariant clustering (Spin), a novel self-supervised learning method that clusters speech representations and performs swapped prediction between the original and speaker-perturbed utterances. Spin disentangles speaker information and preserves content representations with just 45 minutes of fine-tuning on a single GPU. Spin improves pre-trained networks and outperforms prior methods in speech recognition and acoustic unit discovery.
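The swapped-prediction idea in the abstract — each view of an utterance predicts the cluster assignment of the other view — can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, the dot-product cluster scoring, and the symmetric cross-entropy form are illustrative assumptions in the style of SwAV-like objectives.

```python
import math

def softmax(logits):
    # numerically stable softmax over a list of scores
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cluster_probs(feature, codebook, temperature=0.1):
    # soft cluster assignment: softmax over (feature . code) / temperature
    logits = [sum(f * c for f, c in zip(feature, code)) / temperature
              for code in codebook]
    return softmax(logits)

def swapped_prediction_loss(feat_orig, feat_perturbed, codebook):
    # cross-entropy in both directions: the original view predicts the
    # perturbed view's cluster assignment, and vice versa
    p = cluster_probs(feat_orig, codebook)
    q = cluster_probs(feat_perturbed, codebook)
    def ce(target, pred):
        return -sum(t * math.log(max(pr, 1e-12))
                    for t, pr in zip(target, pred))
    return 0.5 * (ce(q, p) + ce(p, q))
```

Because the loss averages the two cross-entropy directions, it is symmetric in the two views; minimizing it pushes the original and speaker-perturbed frames toward the same cluster, which is the speaker-invariance pressure the abstract describes.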