SOTAVerified

Speech Synthesis

Speech synthesis is the task of generating speech from another modality, such as text or lip movements.

Note that the leaderboards here are not directly comparable across studies: they use mean opinion score (MOS) as the metric, and each study collects ratings from a different pool of listeners on Amazon Mechanical Turk.

(Image credit: WaveNet: A Generative Model for Raw Audio)
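Since the leaderboards below report mean opinion score, here is a minimal sketch of how a MOS and its 95% confidence interval are typically computed from raw listener ratings. The function name and the ratings are illustrative, not from the source; only the Python standard library is used.

```python
import math
import statistics

def mos_with_ci(ratings, z=1.96):
    """Mean opinion score with a normal-approximation 95% CI half-width.

    ratings: iterable of listener scores on the usual 1-5 scale.
    """
    n = len(ratings)
    mean = statistics.fmean(ratings)
    # Sample standard deviation; CI half-width via the normal approximation.
    half_width = z * statistics.stdev(ratings) / math.sqrt(n)
    return mean, half_width

# Hypothetical ratings for one system from a small listening test.
scores = [5, 4, 4, 5, 3, 4, 5, 4]
mos, ci = mos_with_ci(scores)
print(f"MOS = {mos:.2f} ± {ci:.2f}")  # MOS = 4.25 ± 0.49
```

In practice published MOS studies use many more raters per utterance, and the confidence intervals rarely separate systems cleanly, which is one reason scores from different studies should not be ranked against each other.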

Papers

Showing 151–200 of 1249 papers

Title | Status | Hype
Effective Deep Learning Models for Automatic Diacritization of Arabic Text | Code | 1
Accented Text-to-Speech Synthesis with a Conditional Variational Autoencoder | Code | 1
Automatic Prosody Annotation with Pre-Trained Text-Speech Model | Code | 1
EdiTTS: Score-based Editing for Controllable Text-to-Speech | Code | 1
RAD-TTS: Parallel Flow-Based TTS with Robust Alignment Learning and Diverse Synthesis | Code | 1
Neural HMMs are all you need (for high-quality attention-free TTS) | Code | 1
Disentanglement in a GAN for Unconditional Speech Synthesis | Code | 1
Digital Voicing of Silent Speech | Code | 1
dMel: Speech Tokenization made Simple | Code | 1
Neural Text to Articulate Talk: Deep Text to Audiovisual Speech Synthesis achieving both Auditory and Photo-realism | Code | 1
AnCoGen: Analysis, Control and Generation of Speech with a Masked Autoencoder | Code | 1
Automatic Tuning of Loss Trade-offs without Hyper-parameter Search in End-to-End Zero-Shot Speech Synthesis | Code | 1
Diffusion-Based Voice Conversion with Fast Maximum Likelihood Sampling Scheme | Code | 1
EfficientNet-Absolute Zero for Continuous Speech Keyword Spotting | Code | 1
A Neuro-AI Interface for Evaluating Generative Adversarial Networks | Code | 1
AutoDiff: combining Auto-encoder and Diffusion model for tabular data synthesizing | Code | 1
DiffV2S: Diffusion-based Video-to-Speech Synthesis with Vision-guided Speaker Embedding | Code | 1
Diffusion-Based Mel-Spectrogram Enhancement for Personalized Speech Synthesis with Found Data | Code | 1
DiffProsody: Diffusion-based Latent Prosody Generation for Expressive Speech Synthesis with Prosody Conditional Adversarial Training | Code | 1
DiffWave: A Versatile Diffusion Model for Audio Synthesis | Code | 1
Synthetic-Neuroscore: Using A Neuro-AI Interface for Evaluating Generative Adversarial Networks | Code | 1
TTS-Portuguese Corpus: a corpus for speech synthesis in Brazilian Portuguese | Code | 1
Developing multilingual speech synthesis system for Ojibwe, Mi'kmaq, and Maliseet | Code | 1
Detection of Prosodic Boundaries in Speech Using Wav2Vec 2.0 | Code | 1
NanoFlow: Scalable Normalizing Flows with Sublinear Parameter Complexity | Code | 1
Deep Speech Synthesis from MRI-Based Articulatory Representations | Code | 1
Exploring Transfer Learning for Low Resource Emotional TTS | Code | 1
Deep Speech Synthesis from Articulatory Representations | Code | 1
Multilingual Byte2Speech Models for Scalable Low-resource Speech Synthesis | Code | 1
Deep Learning Enabled Semantic Communications with Speech Recognition and Synthesis | Code | 1
Dynamical Variational Autoencoders: A Comprehensive Review | Code | 1
FastPitchFormant: Source-filter based Decomposed Modeling for Speech Synthesis | Code | 1
Multilingual Text-to-Speech Synthesis for Turkic Languages Using Transliteration | Code | 1
Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions | Code | 1
One Model, Many Languages: Meta-learning for Multilingual Text-to-Speech | Code | 1
Fine-grained style control in Transformer-based Text-to-speech Synthesis | Code | 1
FonBund: A Library for Combining Cross-lingual Phonological Segment Data | Code | 1
FMFCC-A: A Challenging Mandarin Dataset for Synthetic Speech Detection | Code | 1
Cross-modal information fusion for voice spoofing detection | Code | 1
Bts-e: Audio deepfake detection using breathing-talking-silence encoder | Code | 1
Cross-speaker Emotion Transfer Based on Speaker Condition Layer Normalization and Semi-Supervised Training in Text-To-Speech | Code | 1
Audio Jailbreak: An Open Comprehensive Benchmark for Jailbreaking Large Audio-Language Models | Code | 1
Grad-TTS: A Diffusion Probabilistic Model for Text-to-Speech | Code | 1
MnTTS2: An Open-Source Multi-Speaker Mongolian Text-to-Speech Synthesis Dataset | Code | 1
APNet2: High-quality and High-efficiency Neural Vocoder with Direct Prediction of Amplitude and Phase Spectra | Code | 1
ControlVC: Zero-Shot Voice Conversion with Time-Varying Controls on Pitch and Speed | Code | 1
Mixer-TTS: non-autoregressive, fast and compact text-to-speech model conditioned on language model embeddings | Code | 1
ADAPTERMIX: Exploring the Efficacy of Mixture of Adapters for Low-Resource TTS Adaptation | Code | 1
Attentron: Few-Shot Text-to-Speech Utilizing Attention-Based Variable-Length Embedding | Code | 1
Conditional Sound Generation Using Neural Discrete Time-Frequency Representation Learning | Code | 1
Page 4 of 25

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | PeriodWave-Turbo-L | PESQ | 4.45 | – | Unverified
2 | BigVGAN-v2 | PESQ | 4.36 | – | Unverified
3 | EVA-GAN-big | PESQ | 4.35 | – | Unverified
4 | PeriodWave + FreeU | PESQ | 4.25 | – | Unverified
5 | RFWave | PESQ | 4.23 | – | Unverified
6 | BigVSAN (w/ snakebeta) | PESQ | 4.12 | – | Unverified
7 | BigVSAN | PESQ | 4.12 | – | Unverified
8 | EVA-GAN-base | PESQ | 4.03 | – | Unverified
9 | BigVGAN | PESQ | 4.03 | – | Unverified
10 | Vocos | PESQ | 3.7 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Tacotron 2 | Mean Opinion Score | 4.53 | – | Unverified
2 | WaveNet (Linguistic) | Mean Opinion Score | 4.34 | – | Unverified
3 | WaveNet (L+F) | Mean Opinion Score | 4.21 | – | Unverified
4 | Tacotron | Mean Opinion Score | 4 | – | Unverified
5 | HMM-driven concatenative | Mean Opinion Score | 3.86 | – | Unverified
6 | LSTM-RNN parametric | Mean Opinion Score | 3.67 | – | Unverified
7 | means | Mean Opinion Score | 0 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BDDM vocoder | Mean Opinion Score | 4.48 | – | Unverified
2 | DiffWave LARGE | Mean Opinion Score | 4.44 | – | Unverified
3 | Neural HMM | Mean Opinion Score | 3.24 | – | Unverified
4 | Neural HMM Ablation with 1 state per phone | Mean Opinion Score | 2.68 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | WaveNet (L+F) | Mean Opinion Score | 4.08 | – | Unverified
2 | LSTM-RNN parametric | Mean Opinion Score | 3.79 | – | Unverified
3 | HMM-driven concatenative | Mean Opinion Score | 3.47 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SampleRNN (2-tier) | NLL | 1.39 | – | Unverified
2 | SampleRNN (3-tier) | NLL | 1.39 | – | Unverified
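The SampleRNN entries report negative log-likelihood in bits per audio sample, while most frameworks compute cross-entropy in nats; the conversion is a division by ln 2. A minimal sketch (my own illustration, not from the source, standard library only):

```python
import math

def nats_to_bits(nll_nats):
    """Convert a negative log-likelihood from nats to bits per sample."""
    return nll_nats / math.log(2)

# E.g. an average cross-entropy of roughly 0.96 nats/sample is about
# 1.39 bits/sample, the scale used in the table above.
print(nats_to_bits(0.9635))
```

Lower is better for this metric: fewer bits per sample means the model assigns higher probability to the held-out audio.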