SOTAVerified

Text-To-Speech Synthesis

Text-to-speech (TTS) synthesis is a machine learning task that converts written text into spoken audio. The goal is to generate synthetic speech that sounds natural and resembles human speech as closely as possible.
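Before any acoustic model or vocoder runs, a typical TTS pipeline first normalizes the input text (expanding digits, abbreviations, and so on). A minimal sketch of that front-end stage is below; the digit map and rules are simplified, illustrative assumptions, not the front end of any system listed on this page.

```python
import re

# Simplified digit-to-word map; real TTS front ends handle multi-digit
# numbers, ordinals, currencies, dates, and abbreviations as well.
_DIGITS = {
    "0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
    "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine",
}

def normalize(text: str) -> str:
    """Lowercase, spell out single digits, and collapse whitespace."""
    text = text.lower()
    text = re.sub(r"\d", lambda m: " " + _DIGITS[m.group(0)] + " ", text)
    return " ".join(text.split())

print(normalize("Call me at 3 PM"))  # call me at three pm
```

The normalized string would then feed a grapheme-to-phoneme step and the acoustic model proper.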

Papers

Showing 76–100 of 332 papers

Title | Status | Hype
YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Shot Voice Conversion for everyone | Code | 1
Towards Lifelong Learning of Multilingual Text-To-Speech Synthesis | Code | 0
Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis | Code | 0
The Emotional Voices Database: Towards Controlling the Emotion Dimension in Voice Generation Systems | Code | 0
Tools and resources for Romanian text-to-speech and speech-to-text applications | Code | 0
Systematic Inequalities in Language Technology Performance across the World's Languages | Code | 0
Comparison of Speech Representations for Automatic Quality Estimation in Multi-Speaker Text-to-Speech Synthesis | Code | 0
Speech Synthesis from Text and Ultrasound Tongue Image-based Articulatory Input | Code | 0
PriorGrad: Improving Conditional Denoising Diffusion Models with Data-Dependent Adaptive Prior | Code | 0
Preparing an Endangered Language for the Digital Age: The Case of Judeo-Spanish | Code | 0
Attentive Multi-Layer Perceptron for Non-autoregressive Generation | Code | 0
Spoofing Speaker Verification Systems with Deep Multi-speaker Text-to-speech Synthesis | Code | 0
Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale | Code | 0
Multimodal Latent Language Modeling with Next-Token Diffusion | Code | 0
Bayesian Parameter-Efficient Fine-Tuning for Overcoming Catastrophic Forgetting | Code | 0
Non-Autoregressive Neural Text-to-Speech | Code | 0
Mlphon: A Multifunctional Grapheme-Phoneme Conversion Tool Using Finite State Transducers | Code | 0
Meta Learning Text-to-Speech Synthesis in over 7000 Languages | Code | 0
Effective parameter estimation methods for an ExcitNet model in generative text-to-speech systems | Code | 0
MIA-Prognosis: A Deep Learning Framework to Predict Therapy Response | Code | 0
ECAPA-TDNN for Multi-speaker Text-to-speech Synthesis | Code | 0
Back Transcription as a Method for Evaluating Robustness of Natural Language Understanding Models to Speech Recognition Errors | Code | 0
MelNet: A Generative Model for Audio in the Frequency Domain | Code | 0
Investigation of enhanced Tacotron text-to-speech synthesis systems with self-attention for pitch accent language | Code | 0
Page 4 of 14

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NaturalSpeech | Audio Quality MOS | 4.56 | – | Unverified
2 | VITS | Audio Quality MOS | 4.43 | – | Unverified
3 | Grad-TTS + HiFiGAN (1000 steps) | Audio Quality MOS | 4.37 | – | Unverified
4 | FastSpeech 2 + HiFiGAN | Audio Quality MOS | 4.34 | – | Unverified
5 | Glow-TTS + HiFiGAN | Audio Quality MOS | 4.34 | – | Unverified
6 | FastSpeech 2 + HiFiGAN | Audio Quality MOS | 4.32 | – | Unverified
7 | FastDiff (4 steps) | Audio Quality MOS | 4.28 | – | Unverified
8 | FastDiff-TTS | Audio Quality MOS | 4.03 | – | Unverified
9 | Transformer TTS (Mel + WaveGlow) | Audio Quality MOS | 3.88 | – | Unverified
10 | FastSpeech (Mel + WaveGlow) | Audio Quality MOS | 3.84 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Mia | 10-keyword Speech Commands dataset | 16 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Token-Level Ensemble Distillation | Phoneme Error Rate | 4.6 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Tacotron 2 | Mean Opinion Score | 3.74 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Tacotron 2 | Mean Opinion Score | 3.49 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Match-TTS | GMOS | 3.7 | – | Unverified
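Most entries above report Mean Opinion Score (MOS): listeners rate samples on a 1–5 scale and the scores are averaged, usually with a confidence interval. A minimal sketch of that computation is below; the ratings are made-up illustration values, not taken from any paper in the tables.

```python
import statistics

def mos(ratings: list[float]) -> tuple[float, float]:
    """Return the Mean Opinion Score and a rough 95% confidence
    half-width (normal approximation with sample stdev)."""
    mean = statistics.fmean(ratings)
    if len(ratings) < 2:
        return mean, 0.0
    half = 1.96 * statistics.stdev(ratings) / len(ratings) ** 0.5
    return mean, half

# Hypothetical ratings from ten listeners for one synthesized utterance.
ratings = [5, 4, 5, 4, 4, 5, 3, 4, 5, 4]
m, ci = mos(ratings)
print(f"MOS = {m:.2f} ± {ci:.2f}")  # MOS = 4.30 ± 0.42
```

Published MOS studies use far more listeners and utterances per system, which is why small gaps such as 4.34 vs. 4.32 above are hard to interpret without the reported confidence intervals.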