Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning
Wei Ping, Kainan Peng, Andrew Gibiansky, Sercan O. Arik, Ajay Kannan, Sharan Narang, Jonathan Raiman, John Miller
Code
- github.com/kaiidams/voice100-tts (PyTorch, ★ 0)
- github.com/HaiFengZeng/clari_wavenet_vocoder (PyTorch, ★ 0)
- github.com/mitsu-h/deepvoice3 (Torch, ★ 0)
- github.com/kinimod23/ATS_Project (TensorFlow, ★ 0)
- github.com/r9y9/deepvoice3_pytorch (PyTorch, ★ 0)
- github.com/kaiidams/voice100 (PyTorch, ★ 0)
- github.com/TartuNLP/deepvoice3_pytorch (PyTorch, ★ 0)
Abstract
We present Deep Voice 3, a fully-convolutional attention-based neural text-to-speech (TTS) system. Deep Voice 3 matches state-of-the-art neural speech synthesis systems in naturalness while training ten times faster. We scale Deep Voice 3 to dataset sizes unprecedented for TTS, training on more than eight hundred hours of audio from over two thousand speakers. In addition, we identify common error modes of attention-based speech synthesis networks, demonstrate how to mitigate them, and compare several different waveform synthesis methods. We also describe how to scale inference to ten million queries per day on a single-GPU server.
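The abstract mentions mitigating common error modes of attention-based synthesis (e.g., skipped or repeated words caused by non-monotonic alignments). A sketch of one inference-time mitigation in this spirit is to restrict each decoder step's attention to a small window ahead of the previously attended encoder position, forcing the alignment to advance monotonically. The function names and the window heuristic below are illustrative assumptions, not the paper's exact procedure.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def windowed_monotonic_attention(score_rows, window=3):
    """Greedy, inference-time monotonic attention constraint (illustrative).

    score_rows: one list of raw attention scores over encoder positions
    per decoder step. At each step, attention is masked to a window of
    `window` positions starting at the previously attended position, so
    the alignment can only move forward. Returns the attended encoder
    position at each decoder step.
    """
    prev = 0
    alignment = []
    for scores in score_rows:
        lo, hi = prev, min(prev + window, len(scores))
        # Mask scores outside the monotonic window before normalizing.
        masked = [s if lo <= i < hi else float("-inf")
                  for i, s in enumerate(scores)]
        weights = softmax(masked)
        prev = max(range(len(weights)), key=weights.__getitem__)
        alignment.append(prev)
    return alignment
```

For example, if a late decoder step assigns its highest raw score to an encoder position far behind the current one (a repetition error in unconstrained attention), the window mask excludes it and the alignment keeps moving forward.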