StarGAN-VC: Non-parallel many-to-many voice conversion with star generative adversarial networks
Hirokazu Kameoka, Takuhiro Kaneko, Kou Tanaka, Nobukatsu Hojo
Code
- github.com/kamepong/StarGAN-VC (Official, PyTorch) ★ 23
- github.com/jackaduma/CycleGAN-VC2 (PyTorch) ★ 571
- github.com/nafiuny/ICRCycleGAN-VC (PyTorch) ★ 15
- github.com/Emilija2000/PSIML6_Voice_style_transfer (PyTorch) ★ 1
- github.com/deciding/StarGAN-VC (PyTorch) ★ 0
- github.com/SamuelBroughton/StarGAN-Voice-Conversion (PyTorch) ★ 0
- github.com/augu0093/Voice-Conversion-Project (PyTorch) ★ 0
- github.com/seo3650/Audio_style_transfer (PyTorch) ★ 0
- github.com/wdmdev/dtu_voice_conversion_project (PyTorch) ★ 0
- github.com/augu0093/Voice-Conversion-Project_StarGAN_Danspeech (PyTorch) ★ 0
Abstract
This paper proposes a method that allows non-parallel many-to-many voice conversion (VC) by using a variant of a generative adversarial network (GAN) called StarGAN. Our method, which we call StarGAN-VC, is noteworthy in that it (1) requires no parallel utterances, transcriptions, or time alignment procedures for speech generator training, (2) simultaneously learns many-to-many mappings across different attribute domains using a single generator network, (3) is able to generate converted speech signals quickly enough to allow real-time implementations, and (4) requires only several minutes of training examples to generate reasonably realistic-sounding speech. Subjective evaluation experiments on a non-parallel many-to-many speaker identity conversion task revealed that the proposed method obtained higher sound quality and speaker similarity than a state-of-the-art method based on variational autoencoding GANs.
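The key idea in point (2), a single generator serving every source-to-target speaker pair, rests on conditioning the generator on a target-attribute code. The following is a minimal, hypothetical sketch of that conditioning mechanism only (the helper names and feature dimensions are illustrative assumptions, not the authors' architecture): a one-hot target-speaker code is broadcast over time and appended to each frame of acoustic features before they enter the generator G(x, c).

```python
import numpy as np

def one_hot(speaker_id: int, num_speakers: int) -> np.ndarray:
    """One-hot target-attribute code c for the desired speaker."""
    code = np.zeros(num_speakers)
    code[speaker_id] = 1.0
    return code

def condition_features(features: np.ndarray, code: np.ndarray) -> np.ndarray:
    """Tile the speaker code over the time axis and append it to every
    frame, so one generator network can be steered toward any target
    domain instead of training a separate network per speaker pair."""
    num_frames, _ = features.shape
    tiled = np.tile(code, (num_frames, 1))          # (frames, num_speakers)
    return np.concatenate([features, tiled], axis=1)

# Example: 100 frames of 36-dim features (e.g. mel-cepstral coefficients),
# converted toward speaker 2 out of 4 possible target speakers.
x = np.random.randn(100, 36)
c = one_hot(2, 4)
g_input = condition_features(x, c)
print(g_input.shape)  # (100, 40)
```

Because the attribute code is an input rather than a fixed network parameter, swapping in a different one-hot code at inference time is all that is needed to change the conversion target.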