SOTAVerified

Speech Synthesis

Speech synthesis is the task of generating speech from another modality, such as text or lip movements.

Please note that the leaderboards here are not directly comparable between studies, as they use mean opinion score (MOS) as the metric and collect ratings from different samples of Amazon Mechanical Turk workers.
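For reference, MOS is simply the arithmetic mean of listener ratings on a 1–5 scale. A minimal sketch (with made-up ratings, not data from any study here) of how it is computed, and why different rater pools can yield different scores for the same system:

```python
from math import sqrt
from statistics import mean, stdev

def mos(ratings):
    """Mean opinion score: the average of listener ratings on a 1-5 scale."""
    return mean(ratings)

def mos_ci95(ratings):
    """Approximate 95% confidence interval half-width (normal approximation)."""
    return 1.96 * stdev(ratings) / sqrt(len(ratings))

# Hypothetical ratings of the same system from two different rater pools:
pool_a = [5, 4, 5, 4, 4, 5, 3, 4]
pool_b = [4, 3, 4, 3, 4, 3, 3, 4]
print(mos(pool_a), mos(pool_b))  # the pools disagree, so the scores differ
```

This is why MOS numbers from different papers, each with its own rater pool and instructions, cannot be ranked against each other directly.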

(Image credit: WaveNet: A Generative Model for Raw Audio)

Papers

Showing 101–150 of 1249 papers

| Title | Status | Hype |
| --- | --- | --- |
| Mitigating Unauthorized Speech Synthesis for Voice Protection | Code | 1 |
| MnTTS: An Open-Source Mongolian Text-to-Speech Synthesis Dataset and Accompanied Baseline | Code | 1 |
| End-to-End Adversarial Text-to-Speech | Code | 1 |
| MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis | Code | 1 |
| Emotion Rendering for Conversational Speech Synthesis with Heterogeneous Graph-Based Context Modeling | Code | 1 |
| TTS-Portuguese Corpus: a corpus for speech synthesis in Brazilian Portuguese | Code | 1 |
| Evaluating Speech Synthesis by Training Recognizers on Synthetic Speech | Code | 1 |
| Evaluating Parameter-Efficient Transfer Learning Approaches on SURE Benchmark for Speech Understanding | Code | 1 |
| Articulation GAN: Unsupervised modeling of articulatory learning | Code | 1 |
| Mixer-TTS: non-autoregressive, fast and compact text-to-speech model conditioned on language model embeddings | Code | 1 |
| Meta-TTS: Meta-Learning for Few-Shot Speaker Adaptive Text-to-Speech | Code | 1 |
| Generative Expressive Conversational Speech Synthesis | Code | 1 |
| ArTST: Arabic Text and Speech Transformer | Code | 1 |
| APNet2: High-quality and High-efficiency Neural Vocoder with Direct Prediction of Amplitude and Phase Spectra | Code | 1 |
| ASR data augmentation in low-resource settings using cross-lingual multi-speaker TTS and cross-lingual voice conversion | Code | 1 |
| A Spectral Energy Distance for Parallel Speech Synthesis | Code | 1 |
| EmoSpeech: Guiding FastSpeech2 Towards Emotional Text to Speech | Code | 1 |
| Assem-VC: Realistic Voice Conversion by Assembling Modern Speech Synthesis Techniques | Code | 1 |
| Embedding a Differentiable Mel-cepstral Synthesis Filter to a Neural Speech Synthesis System | Code | 1 |
| EMNS /Imz/ Corpus: An emotive single-speaker dataset for narrative storytelling in games, television and graphic novels | Code | 1 |
| End-to-End Zero-Shot Voice Conversion with Location-Variable Convolutions | Code | 1 |
| From Speaker Verification to Multispeaker Speech Synthesis, Deep Transfer with Feedback Constraint | Code | 1 |
| Effective Deep Learning Models for Automatic Diacritization of Arabic Text | Code | 1 |
| Lip to Speech Synthesis with Visual Context Attentional GAN | Code | 1 |
| EdiTTS: Score-based Editing for Controllable Text-to-Speech | Code | 1 |
| A Survey on Non-Autoregressive Generation for Neural Machine Translation and Beyond | Code | 1 |
| Dynamical Variational Autoencoders: A Comprehensive Review | Code | 1 |
| Effective Use of Variational Embedding Capacity in Expressive End-to-End Speech Synthesis | Code | 1 |
| dMel: Speech Tokenization made Simple | Code | 1 |
| Disentanglement in a GAN for Unconditional Speech Synthesis | Code | 1 |
| EfficientNet-Absolute Zero for Continuous Speech Keyword Spotting | Code | 1 |
| Learning pronunciation from a foreign language in speech synthesis networks | Code | 1 |
| Lip-to-Speech Synthesis in the Wild with Multi-task Learning | Code | 1 |
| Diffusion-Based Voice Conversion with Fast Maximum Likelihood Sampling Scheme | Code | 1 |
| Diffusion-Based Mel-Spectrogram Enhancement for Personalized Speech Synthesis with Found Data | Code | 1 |
| DiffV2S: Diffusion-based Video-to-Speech Synthesis with Vision-guided Speaker Embedding | Code | 1 |
| DiffProsody: Diffusion-based Latent Prosody Generation for Expressive Speech Synthesis with Prosody Conditional Adversarial Training | Code | 1 |
| A^3T: Alignment-Aware Acoustic and Text Pretraining for Speech Synthesis and Editing | Code | 1 |
| DiffWave: A Versatile Diffusion Model for Audio Synthesis | Code | 1 |
| Learning Disentangled Phone and Speaker Representations in a Semi-Supervised VQ-VAE Paradigm | Code | 1 |
| A Neuro-AI Interface for Evaluating Generative Adversarial Networks | Code | 1 |
| Digital Voicing of Silent Speech | Code | 1 |
| AnCoGen: Analysis, Control and Generation of Speech with a Masked Autoencoder | Code | 1 |
| Accented Text-to-Speech Synthesis with a Conditional Variational Autoencoder | Code | 1 |
| Developing multilingual speech synthesis system for Ojibwe, Mi'kmaq, and Maliseet | Code | 1 |
| Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis | Code | 1 |
| KazakhTTS: An Open-Source Kazakh Text-to-Speech Synthesis Dataset | Code | 1 |
| ITAcotron 2: Transfering English Speech Synthesis Architectures and Speech Features to Italian | Code | 1 |
| KazEmoTTS: A Dataset for Kazakh Emotional Text-to-Speech Synthesis | Code | 1 |
| Deep Speech Synthesis from Articulatory Representations | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PeriodWave-Turbo-L | PESQ | 4.45 | | Unverified |
| 2 | BigVGAN-v2 | PESQ | 4.36 | | Unverified |
| 3 | EVA-GAN-big | PESQ | 4.35 | | Unverified |
| 4 | PeriodWave + FreeU | PESQ | 4.25 | | Unverified |
| 5 | RFWave | PESQ | 4.23 | | Unverified |
| 6 | BigVSAN (w/ snakebeta) | PESQ | 4.12 | | Unverified |
| 7 | BigVSAN | PESQ | 4.12 | | Unverified |
| 8 | EVA-GAN-base | PESQ | 4.03 | | Unverified |
| 9 | BigVGAN | PESQ | 4.03 | | Unverified |
| 10 | Vocos | PESQ | 3.7 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Tacotron 2 | Mean Opinion Score | 4.53 | | Unverified |
| 2 | WaveNet (Linguistic) | Mean Opinion Score | 4.34 | | Unverified |
| 3 | WaveNet (L+F) | Mean Opinion Score | 4.21 | | Unverified |
| 4 | Tacotron | Mean Opinion Score | 4 | | Unverified |
| 5 | HMM-driven concatenative | Mean Opinion Score | 3.86 | | Unverified |
| 6 | LSTM-RNN parametric | Mean Opinion Score | 3.67 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BDDM vocoder | Mean Opinion Score | 4.48 | | Unverified |
| 2 | DiffWave LARGE | Mean Opinion Score | 4.44 | | Unverified |
| 3 | Neural HMM | Mean Opinion Score | 3.24 | | Unverified |
| 4 | Neural HMM Ablation with 1 state per phone | Mean Opinion Score | 2.68 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | WaveNet (L+F) | Mean Opinion Score | 4.08 | | Unverified |
| 2 | LSTM-RNN parametric | Mean Opinion Score | 3.79 | | Unverified |
| 3 | HMM-driven concatenative | Mean Opinion Score | 3.47 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SampleRNN (2-tier) | NLL | 1.39 | | Unverified |
| 2 | SampleRNN (3-tier) | NLL | 1.39 | | Unverified |
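NLL figures for autoregressive audio models like SampleRNN are typically reported per quantized sample. A hedged sketch (with invented per-sample probabilities, not the paper's data) of computing an average negative log-likelihood in bits per sample, and of converting between nats and bits:

```python
from math import log, log2

def nats_to_bits(x):
    """Convert a log-likelihood value from nats (natural log) to bits (log base 2)."""
    return x / log(2)

def avg_nll_bits(probs):
    """Average negative log-likelihood in bits per sample, given the
    probability the model assigned to each observed quantized sample."""
    return sum(-log2(p) for p in probs) / len(probs)

# Hypothetical probabilities a model assigned to four observed audio samples:
print(round(avg_nll_bits([0.4, 0.35, 0.5, 0.3]), 3))
```

Lower is better: an NLL of 1.39 bits per sample means the model needs on average 1.39 bits to encode each sample it observes.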