fairseq S^2: A Scalable and Integrable Speech Synthesis Toolkit
Changhan Wang, Wei-Ning Hsu, Yossi Adi, Adam Polyak, Ann Lee, Peng-Jen Chen, Jiatao Gu, Juan Pino
Abstract
This paper presents fairseq S^2, a fairseq extension for speech synthesis. We implement a number of autoregressive (AR) and non-autoregressive text-to-speech models, along with their multi-speaker variants. To enable training speech synthesis models on less curated data, we build a set of preprocessing tools and demonstrate their importance empirically. To facilitate faster development and analysis iterations, we include a suite of automatic metrics. Beyond the features added specifically for this extension, fairseq S^2 also benefits from the scalability offered by fairseq and can be easily integrated with other state-of-the-art systems provided in this framework. The code, documentation, and pre-trained models are available at https://github.com/pytorch/fairseq/tree/master/examples/speech_synthesis.