PseudoVC: Improving One-shot Voice Conversion with Pseudo Paired Data
Songjun Cao, Qinghua Wu, Jie Chen, Jin Li, Long Ma
Abstract
Because parallel training data are scarce for one-shot voice conversion (VC), VC systems are typically trained by waveform reconstruction. A typical one-shot VC system comprises a content encoder and a speaker encoder. However, two training-inference mismatches arise: one in the inputs to the content encoder and another in the inputs to the speaker encoder. To address these mismatches, we propose a novel VC training method called PseudoVC. First, we introduce an information perturbation approach named Pseudo Conversion to tackle the first mismatch: it leverages pretrained VC models to convert the source utterance into a perturbed utterance, which is fed to the content encoder during training. Second, we propose an approach termed Speaker Sampling to resolve the second mismatch, which substitutes the input to the speaker encoder with another utterance from the same speaker during training. Experimental results demonstrate that the proposed Pseudo Conversion outperforms previous information perturbation methods, and that the overall PseudoVC method surpasses publicly available VC models. Audio examples are available.
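To make the two techniques concrete, the following is a minimal sketch of one PseudoVC-style training step as described in the abstract. All names (`Utterance`, `pseudo_convert`, `speaker_sampling`) and the toy data representation are illustrative assumptions, not the authors' implementation; in particular, the pretrained VC model is stood in for by a function that re-renders the source content under another speaker's identity.

```python
import random

class Utterance:
    """Toy stand-in for a speech utterance (assumption: real systems
    would carry waveforms or acoustic features, not strings)."""
    def __init__(self, speaker, uid, content):
        self.speaker = speaker   # speaker identity
        self.uid = uid           # utterance index within that speaker
        self.content = content   # linguistic content

def pseudo_convert(src, vc_speakers, rng):
    # Pseudo Conversion: a pretrained VC model converts the source
    # utterance to a randomly chosen other speaker, so the content
    # encoder sees converted speech during training, matching inference.
    target = rng.choice([s for s in vc_speakers if s != src.speaker])
    return Utterance(speaker=target, uid=src.uid, content=src.content)

def speaker_sampling(src, corpus, rng):
    # Speaker Sampling: the speaker encoder receives a *different*
    # utterance from the same speaker, matching the inference condition
    # where the reference utterance is not the source utterance.
    candidates = [u for u in corpus
                  if u.speaker == src.speaker and u.uid != src.uid]
    return rng.choice(candidates)

def training_step(src, corpus, vc_speakers, rng):
    content_input = pseudo_convert(src, vc_speakers, rng)  # -> content encoder
    speaker_input = speaker_sampling(src, corpus, rng)     # -> speaker encoder
    target = src                                           # reconstruction target
    return content_input, speaker_input, target

# Toy corpus: two speakers with two utterances each.
rng = random.Random(0)
corpus = [Utterance("spk_a", 0, "hello"), Utterance("spk_a", 1, "world"),
          Utterance("spk_b", 0, "foo"),   Utterance("spk_b", 1, "bar")]
src = corpus[0]
c_in, s_in, target = training_step(src, corpus, ["spk_a", "spk_b"], rng)
```

The key invariants of the sketch: the content-encoder input keeps the source content but a different speaker identity, while the speaker-encoder input shares the speaker but is a different utterance, so neither encoder sees the exact source it would memorize under plain reconstruction.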