SOTAVerified

Text-To-Speech Synthesis

Text-to-speech (TTS) synthesis is a machine learning task that converts written text into spoken audio. The goal is to generate synthetic speech that is intelligible and as close to natural human speech as possible.
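Most neural TTS systems in the paper list below follow a two-stage design: an acoustic model maps text to an intermediate representation (typically a mel spectrogram), and a vocoder converts that representation into a waveform — the "Grad-TTS + HiFiGAN" and "FastSpeech (Mel + WaveGlow)" pairings in the benchmark tables reflect exactly this split. A minimal toy sketch of the pipeline, where every function is a deterministic stand-in rather than a real model:

```python
# Toy sketch of the canonical two-stage TTS pipeline.
# Real systems replace each stage with a neural network; here every
# function is a deterministic stand-in so the data flow is runnable.

def normalize(text: str) -> str:
    """Text front-end: lowercase and drop punctuation. Real front-ends
    also expand numbers, abbreviations, and handle phonetization."""
    return "".join(c for c in text.lower() if c.isalpha() or c.isspace())

def acoustic_model(text: str, frames_per_char: int = 5) -> list[list[float]]:
    """Stand-in for an acoustic model (e.g. Tacotron 2 / FastSpeech 2):
    maps characters to a dummy 'mel spectrogram' of 80-dim frames."""
    n_frames = len(text) * frames_per_char
    return [[0.0] * 80 for _ in range(n_frames)]

def vocoder(mel: list[list[float]], hop_length: int = 256) -> list[float]:
    """Stand-in for a vocoder (e.g. HiFi-GAN / WaveGlow): upsamples each
    spectrogram frame to hop_length waveform samples."""
    return [0.0] * (len(mel) * hop_length)

def synthesize(text: str) -> list[float]:
    return vocoder(acoustic_model(normalize(text)))

audio = synthesize("Hello, world!")
print(len(audio))  # number of waveform samples produced
```

The frame and sample counts here are placeholders; the point is only the stage boundaries that the model names in the tables below refer to.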

Papers

Showing 151–175 of 332 papers

| Title | Status | Hype |
| --- | --- | --- |
| SLMGAN: Exploiting Speech Language Model Representations for Unsupervised Zero-Shot Voice Conversion in GANs | | 0 |
| High-Quality Automatic Voice Over with Accurate Alignment: Supervision through Self-Supervised Discrete Speech Units | | 0 |
| Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale | Code | 0 |
| ZET-Speech: Zero-shot adaptive Emotion-controllable Text-to-Speech Synthesis with Diffusion and Style-based Models | | 0 |
| VAKTA-SETU: A Speech-to-Speech Machine Translation Service in Select Indic Languages | | 0 |
| MParrotTTS: Multilingual Multi-speaker Text to Speech Synthesis in Low Resource Setting | | 0 |
| A unified front-end framework for English text-to-speech synthesis | | 0 |
| Accented Text-to-Speech Synthesis with Limited Data | | 0 |
| M2-CTTS: End-to-End Multi-scale Multi-modal Conversational Text-to-Speech Synthesis | | 0 |
| A Review of Deep Learning Techniques for Speech Processing | | 0 |
| Zero-shot text-to-speech synthesis conditioned using self-supervised speech representation model | | 0 |
| Text is All You Need: Personalizing ASR Models using Controllable Speech Synthesis | | 0 |
| A Survey on Audio Diffusion Models: Text To Speech Synthesis and Enhancement in Generative AI | | 0 |
| Controllable Prosody Generation With Partial Inputs | | 0 |
| Do Prosody Transfer Models Transfer Prosody? | | 0 |
| ParrotTTS: Text-to-Speech synthesis by exploiting self-supervised representations | | 0 |
| UzbekTagger: The rule-based POS tagger for Uzbek language | | 0 |
| Applying Automated Machine Translation to Educational Video Courses | | 0 |
| ReVISE: Self-Supervised Speech Resynthesis With Visual Input for Universal and Generalized Speech Regeneration | | 0 |
| ReVISE: Self-Supervised Speech Resynthesis with Visual Input for Universal and Generalized Speech Enhancement | | 0 |
| Text-to-speech synthesis based on latent variable conversion using diffusion probabilistic model and variational autoencoder | | 0 |
| Investigation of Japanese PnG BERT language model in text-to-speech synthesis for pitch accent language | | 0 |
| Grad-StyleSpeech: Any-speaker Adaptive Text-to-Speech Synthesis with Diffusion Models | | 0 |
| Technology Pipeline for Large Scale Cross-Lingual Dubbing of Lecture Videos into Multiple Indian Languages | | 0 |
| Virtuoso: Massive Multilingual Speech-Text Joint Semi-Supervised Learning for Text-To-Speech | | 0 |
Page 7 of 14

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | NaturalSpeech | Audio Quality MOS | 4.56 | | Unverified |
| 2 | VITS | Audio Quality MOS | 4.43 | | Unverified |
| 3 | Grad-TTS + HiFiGAN (1000 steps) | Audio Quality MOS | 4.37 | | Unverified |
| 4 | FastSpeech 2 + HiFiGAN | Audio Quality MOS | 4.34 | | Unverified |
| 5 | Glow-TTS + HiFiGAN | Audio Quality MOS | 4.34 | | Unverified |
| 6 | FastSpeech 2 + HiFiGAN | Audio Quality MOS | 4.32 | | Unverified |
| 7 | FastDiff (4 steps) | Audio Quality MOS | 4.28 | | Unverified |
| 8 | FastDiff-TTS | Audio Quality MOS | 4.03 | | Unverified |
| 9 | Transformer TTS (Mel + WaveGlow) | Audio Quality MOS | 3.88 | | Unverified |
| 10 | FastSpeech (Mel + WaveGlow) | Audio Quality MOS | 3.84 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Mia | 10-keyword Speech Commands dataset | 16 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Token-Level Ensemble Distillation | Phoneme Error Rate | 4.6 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Tacotron 2 | Mean Opinion Score | 3.74 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Tacotron 2 | Mean Opinion Score | 3.49 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Match-TTS | GMOS | 3.7 | | Unverified |
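MOS (Mean Opinion Score), the metric behind most rows above, is the average of listener ratings of synthesized utterances on a 1 (bad) to 5 (excellent) scale, usually reported with a ~95% confidence interval. A small sketch of how a MOS and its interval are computed from raw ratings (the ratings here are made-up illustrative data, not from any paper listed above):

```python
import math

def mos_with_ci(ratings: list[int], z: float = 1.96) -> tuple[float, float]:
    """Mean Opinion Score and half-width of its ~95% confidence interval.

    ratings: listener scores on the usual 1 (bad) .. 5 (excellent)
    absolute category rating scale, one per rated utterance.
    """
    n = len(ratings)
    mean = sum(ratings) / n
    # Sample variance with Bessel's correction (n - 1 denominator).
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)
    # Normal-approximation confidence interval on the mean.
    ci = z * math.sqrt(var / n)
    return mean, ci

# Made-up ratings for illustration only.
ratings = [5, 4, 4, 5, 3, 4, 5, 4, 4, 4]
mos, ci = mos_with_ci(ratings)
print(f"MOS = {mos:.2f} ± {ci:.2f}")
```

Because MOS is a subjective average, scores from different papers (different listener pools, different test sentences) are not directly comparable — which is why a leaderboard distinguishes "Claimed" from "Verified" values.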