Fine-Grained and Interpretable Neural Speech Editing
Max Morrison, Cameron Churchwell, Nathan Pruyne, Bryan Pardo
Code: github.com/maxrmorrison/torbi
Abstract
Fine-grained editing of speech attributes, such as prosody (i.e., pitch, loudness, and phoneme durations), pronunciation, speaker identity, and formants, is useful for fine-tuning and fixing imperfections in human and AI-generated speech recordings when creating podcasts, film dialogue, and video game dialogue. Existing speech synthesis systems use representations that entangle two or more of these attributes, prohibiting their use in fine-grained, disentangled editing. In this paper, we demonstrate the first disentangled and interpretable representation of speech with subjective and objective vocoding reconstruction accuracy comparable to that of Mel spectrograms. Our interpretable representation, combined with our proposed data augmentation method, enables training an existing neural vocoder to perform fast, accurate, and high-quality editing of pitch, duration, volume, timbral correlates of volume, pronunciation, speaker identity, and spectral balance.