SOTAVerified

Text-To-Speech Synthesis

Text-to-speech (TTS) synthesis is the machine learning task of converting written text into spoken audio. The goal is to generate synthetic speech that sounds natural and resembles human speech as closely as possible.
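Many of the systems listed below follow a two-stage design: an acoustic model predicts a mel spectrogram from text, and a neural vocoder (e.g. the HiFiGAN or WaveGlow combinations in the benchmark tables) converts that spectrogram into a waveform. A minimal sketch of that interface shape, with placeholder stand-ins for both stages (the names `acoustic_model`, `vocoder`, `N_MELS`, and `HOP` are invented for illustration, not any listed system's API):

```python
# Toy sketch of the common two-stage TTS pipeline:
#   text -> acoustic model -> mel spectrogram -> vocoder -> waveform.
# Both stages are placeholders that only illustrate tensor shapes.

N_MELS = 80   # mel-spectrogram channels, a common choice in practice
HOP = 256     # waveform samples produced per spectrogram frame

def acoustic_model(text: str) -> list[list[float]]:
    """Placeholder: emit one all-zero spectrogram frame per character."""
    return [[0.0] * N_MELS for _ in text]

def vocoder(mel: list[list[float]]) -> list[float]:
    """Placeholder: upsample each frame to HOP silent waveform samples."""
    return [0.0] * (len(mel) * HOP)

mel = acoustic_model("Hello world")       # 11 frames x 80 mel channels
audio = vocoder(mel)                      # 11 * 256 = 2816 samples
print(len(mel), len(mel[0]), len(audio))  # 11 80 2816
```

Real systems differ mainly in how each stage is modeled (autoregressive, flow-based, or diffusion acoustic models; GAN or diffusion vocoders), but the spectrogram hand-off between the two stages is the same.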

Papers

Showing 151–200 of 332 papers

Title | Status | Hype
FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis | Code | 2
The PartialSpoof Database and Countermeasures for the Detection of Short Fake Speech Segments Embedded in an Utterance | - | 0
SOMOS: The Samsung Open MOS Dataset for the Evaluation of Neural Text-to-Speech Synthesis | - | 0
VQTTS: High-Fidelity Text-to-Speech Synthesis with Self-Supervised VQ Acoustic Feature | - | 0
Unsupervised Text-to-Speech Synthesis by Unsupervised Automatic Speech Recognition | Code | 1
Applying Syntax–Prosody Mapping Hypothesis and Prosodic Well-Formedness Constraints to Neural Sequence-to-Sequence Speech Synthesis | - | 0
AutoTTS: End-to-End Text-to-Speech Synthesis through Differentiable Duration Modeling | - | 0
ECAPA-TDNN for Multi-speaker Text-to-speech Synthesis | Code | 0
Text-free non-parallel many-to-many voice conversion using normalising flows | - | 0
iSTFTNet: Fast and Lightweight Mel-Spectrogram Vocoder Incorporating Inverse Short-Time Fourier Transform | Code | 2
Generative Modeling for Low Dimensional Speech Attributes with Neural Spline Flows | Code | 2
Deep Performer: Score-to-Audio Music Performance Synthesis | - | 0
Multi-Stage Deep Transfer Learning for EmIoT-enabled Human-Computer Interaction | - | 0
Transformer-based Models of Text Normalization for Speech Applications | - | 0
Multi-speaker Multi-style Text-to-speech Synthesis With Single-speaker Single-style Training Data Scenarios | - | 0
Multi-Singer: Fast Multi-Singer Singing Voice Vocoder With A Large-Scale Corpus | Code | 1
YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Shot Voice Conversion for everyone | Code | 1
Guided-TTS: A Diffusion Model for Text-to-Speech via Classifier Guidance | - | 0
Systematic Inequalities in Language Technology Performance across the World's Languages | Code | 0
Fine-grained style control in Transformer-based Text-to-speech Synthesis | Code | 1
Towards Lifelong Learning of Multilingual Text-To-Speech Synthesis | Code | 0
Environment Aware Text-to-Speech Synthesis | - | 0
EdiTTS: Score-based Editing for Controllable Text-to-Speech | Code | 1
Prosody-TTS: An end-to-end speech synthesis system with prosody control | - | 0
Neural Speech Synthesis in German | - | 0
PortaSpeech: Portable and High-Quality Generative Text-to-Speech | Code | 2
Conditioning Sequence-to-sequence Networks with Learned Activations | - | 0
Guided-TTS: Text-to-Speech with Untranscribed Speech | - | 0
Low-Latency Incremental Text-to-Speech Synthesis with Distilled Context Prediction Network | - | 0
A Unified Transformer-based Framework for Duplex Text Normalization | - | 0
Extending Text-to-Speech Synthesis with Articulatory Movement Prediction using Ultrasound Tongue Imaging | Code | 0
Location, Location: Enhancing the Evaluation of Text-to-Speech Synthesis Using the Rapid Prosody Transcription Paradigm | - | 0
Speech Synthesis from Text and Ultrasound Tongue Image-based Articulatory Input | Code | 0
WaveGrad 2: Iterative Refinement for Text-to-Speech Synthesis | Code | 1
RyanSpeech: A Corpus for Conversational Text-to-Speech Synthesis | Code | 1
PriorGrad: Improving Conditional Denoising Diffusion Models with Data-Dependent Adaptive Prior | Code | 0
Enhancing Speaking Styles in Conversational Text-to-Speech Synthesis with Graph-based Multi-modal Context Modeling | Code | 1
An objective evaluation of the effects of recording conditions and speaker characteristics in multi-speaker deep neural speech synthesis | - | 0
Speaker verification-derived loss and data augmentation for DNN-based multispeaker speech synthesis | - | 0
RAD-TTS: Parallel Flow-Based TTS with Robust Alignment Learning and Diverse Synthesis | Code | 1
Dual Script E2E framework for Multilingual and Code-Switching ASR | - | 0
Grad-TTS: A Diffusion Probabilistic Model for Text-to-Speech | Code | 1
DiffSinger: Singing Voice Synthesis via Shallow Diffusion Mechanism | Code | 2
Phrase break prediction with bidirectional encoder representations in Japanese text-to-speech synthesis | Code | 0
KazakhTTS: An Open-Source Kazakh Text-to-Speech Synthesis Dataset | Code | 1
Enhancing Word-Level Semantic Representation via Dependency Structure for Expressive Text-to-Speech Synthesis | - | 0
Flavored Tacotron: Conditional Learning for Prosodic-linguistic Features | - | 0
Reinforcement Learning for Emotional Text-to-Speech Synthesis with Improved Emotion Discriminability | - | 0
PnG BERT: Augmented BERT on Phonemes and Graphemes for Neural TTS | - | 0
Continual Speaker Adaptation for Text-to-Speech Synthesis | - | 0
Page 4 of 7

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NaturalSpeech | Audio Quality MOS | 4.56 | - | Unverified
2 | VITS | Audio Quality MOS | 4.43 | - | Unverified
3 | Grad-TTS + HiFiGAN (1000 steps) | Audio Quality MOS | 4.37 | - | Unverified
4 | FastSpeech 2 + HiFiGAN | Audio Quality MOS | 4.34 | - | Unverified
5 | Glow-TTS + HiFiGAN | Audio Quality MOS | 4.34 | - | Unverified
6 | FastSpeech 2 + HiFiGAN | Audio Quality MOS | 4.32 | - | Unverified
7 | FastDiff (4 steps) | Audio Quality MOS | 4.28 | - | Unverified
8 | FastDiff-TTS | Audio Quality MOS | 4.03 | - | Unverified
9 | Transformer TTS (Mel + WaveGlow) | Audio Quality MOS | 3.88 | - | Unverified
10 | FastSpeech (Mel + WaveGlow) | Audio Quality MOS | 3.84 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Mia | 10-keyword Speech Commands dataset | 16 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Token-Level Ensemble Distillation | Phoneme Error Rate | 4.6 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Tacotron 2 | Mean Opinion Score | 3.74 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Tacotron 2 | Mean Opinion Score | 3.49 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Match-TTS | GMOS | 3.7 | - | Unverified
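Most of the benchmark entries above report a Mean Opinion Score (MOS): human listeners rate each audio sample, typically on a 1–5 naturalness scale, and the ratings are averaged, usually reported with a 95% confidence interval. A minimal sketch of that computation (the ratings below are made-up illustration data, not from any listed paper):

```python
# MOS is the arithmetic mean of listener ratings on a 1-5 scale,
# reported here with a normal-approximation 95% confidence interval.
import math
import statistics

def mos_with_ci(ratings: list[int], z: float = 1.96) -> tuple[float, float]:
    """Return (MOS, 95% CI half-width) for a list of 1-5 ratings."""
    mean = statistics.fmean(ratings)
    # Sample standard deviation; the CI assumes roughly normal sample means.
    half_width = z * statistics.stdev(ratings) / math.sqrt(len(ratings))
    return mean, half_width

ratings = [5, 4, 4, 5, 3, 4, 5, 4, 4, 5]  # illustration data
mos, ci = mos_with_ci(ratings)
print(f"MOS = {mos:.2f} ± {ci:.2f}")  # MOS = 4.30 ± 0.42
```

Because MOS depends on the listener pool and test conditions, scores from different studies (such as the two Tacotron 2 rows above) are not directly comparable, which is why verification matters.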