SOTAVerified

Speech Synthesis

Speech synthesis is the task of generating speech from another modality, such as text or lip movements.

Please note that the leaderboards here are not directly comparable between studies: each uses mean opinion score (MOS) as its metric, but collects ratings from different samples of Amazon Mechanical Turk workers.
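To illustrate why MOS figures from different studies are hard to compare, here is a minimal sketch (with hypothetical listener ratings) of how a MOS and its 95% confidence interval are conventionally computed from individual 1-5 scores; two rater pools can produce quite different means for the same system:

```python
import statistics

def mos_with_ci(ratings):
    """Mean opinion score and a normal-approximation 95% CI half-width."""
    n = len(ratings)
    mean = statistics.fmean(ratings)
    # Sample standard deviation; 1.96 is the two-sided 95% z value.
    sd = statistics.stdev(ratings) if n > 1 else 0.0
    half_width = 1.96 * sd / n ** 0.5
    return mean, half_width

# Hypothetical 1-5 ratings of the SAME system from two rater pools:
pool_a = [5, 4, 5, 4, 4, 5, 4, 5]
pool_b = [4, 3, 4, 4, 3, 4, 4, 3]
print(mos_with_ci(pool_a))  # MOS 4.5 plus its CI half-width
print(mos_with_ci(pool_b))  # a noticeably lower MOS from stricter raters
```

The rater pools, scales, and anchoring examples differ across papers, so a 0.1 MOS gap between two studies' tables is not meaningful.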

(Image credit: WaveNet: A Generative Model for Raw Audio)

Papers

Showing 576–600 of 1249 papers

VANI: Very-lightweight Accent-controllable TTS for Native and Non-native speakers with Identity Preservation
Do Prosody Transfer Models Transfer Prosody?
FoundationTTS: Text-to-Speech for ASR Customization with Generative Language Model
DTW-SiameseNet: Dynamic Time Warped Siamese Network for Mispronunciation Detection and Correction
ParrotTTS: Text-to-Speech synthesis by exploiting self-supervised representations
On the Audio-visual Synchronization for Lip-to-Speech Synthesis
UniFLG: Unified Facial Landmark Generator from Text or Speech
ClArTTS: An Open-Source Classical Arabic Text-to-Speech Corpus
CrossSpeech: Speaker-independent Acoustic Representation for Cross-lingual Speech Synthesis
Fast and small footprint Hybrid HMM-HiFiGAN based system for speech synthesis in Indian languages
Beyond Statistical Similarity: Rethinking Metrics for Deep Generative Models in Engineering Design
UzbekTagger: The rule-based POS tagger for Uzbek language
Time out of Mind: Generating Rate of Speech conditioned on emotion and speaker
On granularity of prosodic representations in expressive text-to-speech
Multilingual Multiaccented Multispeaker TTS with RADTTS
Regeneration Learning: A Learning Paradigm for Data Generation
Applying Automated Machine Translation to Educational Video Courses
ReVISE: Self-Supervised Speech Resynthesis With Visual Input for Universal and Generalized Speech Regeneration
HMM-based data augmentation for E2E systems for building conversational speech synthesis systems
ReVISE: Self-Supervised Speech Resynthesis with Visual Input for Universal and Generalized Speech Enhancement
Investigation of Japanese PnG BERT language model in text-to-speech synthesis for pitch accent language
Text-to-speech synthesis based on latent variable conversion using diffusion probabilistic model and variational autoencoder
Style-Label-Free: Cross-Speaker Style Transfer by Quantized VAE and Speaker-wise Normalization in Speech Synthesis
SNAC: Speaker-normalized affine coupling layer in flow-based architecture for zero-shot multi-speaker text-to-speech
VideoDubber: Machine Translation with Speech-Aware Length Control for Video Dubbing
Page 24 of 50

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | PeriodWave-Turbo-L | PESQ | 4.45 | | Unverified |
| 2 | BigVGAN-v2 | PESQ | 4.36 | | Unverified |
| 3 | EVA-GAN-big | PESQ | 4.35 | | Unverified |
| 4 | PeriodWave + FreeU | PESQ | 4.25 | | Unverified |
| 5 | RFWave | PESQ | 4.23 | | Unverified |
| 6 | BigVSAN (w/ snakebeta) | PESQ | 4.12 | | Unverified |
| 7 | BigVSAN | PESQ | 4.12 | | Unverified |
| 8 | EVA-GAN-base | PESQ | 4.03 | | Unverified |
| 9 | BigVGAN | PESQ | 4.03 | | Unverified |
| 10 | Vocos | PESQ | 3.7 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Tacotron 2 | Mean Opinion Score | 4.53 | | Unverified |
| 2 | WaveNet (Linguistic) | Mean Opinion Score | 4.34 | | Unverified |
| 3 | WaveNet (L+F) | Mean Opinion Score | 4.21 | | Unverified |
| 4 | Tacotron | Mean Opinion Score | 4 | | Unverified |
| 5 | HMM-driven concatenative | Mean Opinion Score | 3.86 | | Unverified |
| 6 | LSTM-RNN parametric | Mean Opinion Score | 3.67 | | Unverified |
| 7 | means | Mean Opinion Score | 0 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | BDDM vocoder | Mean Opinion Score | 4.48 | | Unverified |
| 2 | DiffWave LARGE | Mean Opinion Score | 4.44 | | Unverified |
| 3 | Neural HMM | Mean Opinion Score | 3.24 | | Unverified |
| 4 | Neural HMM Ablation with 1 state per phone | Mean Opinion Score | 2.68 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | WaveNet (L+F) | Mean Opinion Score | 4.08 | | Unverified |
| 2 | LSTM-RNN parametric | Mean Opinion Score | 3.79 | | Unverified |
| 3 | HMM-driven concatenative | Mean Opinion Score | 3.47 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SampleRNN (2-tier) | NLL | 1.39 | | Unverified |
| 2 | SampleRNN (3-tier) | NLL | 1.39 | | Unverified |
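The NLL figures in the last table are conventionally reported in bits per audio sample (base-2 negative log-likelihood averaged over the waveform). A minimal sketch of that computation, using hypothetical per-sample probabilities that a model assigned to the ground-truth samples:

```python
import math

def nll_bits_per_sample(probs):
    """Average negative log-likelihood, in bits, over one waveform.

    `probs` holds the probability the model gave to each ground-truth
    sample; lower NLL means the model predicted the audio better.
    """
    return -sum(math.log2(p) for p in probs) / len(probs)

# Hypothetical probabilities for a 4-sample waveform:
probs = [0.5, 0.25, 0.5, 0.125]
print(nll_bits_per_sample(probs))  # 1.75 bits per sample
```

An NLL of roughly 1.39 bits per sample, as in the table, means the model assigns the true sample an average probability of about 2^-1.39 ≈ 0.38 under its output distribution.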