
PL-EESR: Perceptual Loss Based End-to-End Robust Speaker Representation Extraction

2021-10-03

Yi Ma, Kong Aik Lee, Ville Hautamaki, Haizhou Li


Abstract

Speech enhancement aims to improve the perceptual quality of a speech signal by suppressing background noise. However, excessive suppression may distort the speech and discard speaker information, which degrades speaker embedding extraction. To alleviate this problem, we propose an end-to-end deep learning framework, dubbed PL-EESR, for robust speaker representation extraction. The framework is optimized using feedback from the speaker identification task together with the high-level perceptual deviation between the raw speech signal and its noisy version. We evaluate the system on speaker verification tasks in both noisy and clean environments. Compared to the baseline, our method performs better in both settings, indicating that it not only enhances speaker-related information but also avoids introducing additional distortion.
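The abstract describes a joint objective: a speaker identification loss plus a perceptual term penalizing the deviation between high-level representations of the clean signal and its enhanced noisy version. A minimal numpy sketch of such a combined loss is shown below; this is an illustrative assumption, not the paper's implementation, and the function names and the `weight` balancing hyperparameter are hypothetical.

```python
import numpy as np

def cross_entropy(logits, label):
    # Softmax cross-entropy for the speaker identification feedback.
    e = np.exp(logits - logits.max())
    p = e / e.sum()
    return float(-np.log(p[label]))

def perceptual_deviation(clean_feats, enhanced_feats):
    # Mean squared deviation between high-level representations of the
    # clean signal and the enhanced version of its noisy counterpart.
    c = np.asarray(clean_feats, dtype=float)
    h = np.asarray(enhanced_feats, dtype=float)
    return float(np.mean((c - h) ** 2))

def pl_eesr_loss(logits, label, clean_feats, enhanced_feats, weight=0.5):
    # Combined objective: speaker-ID loss plus a weighted perceptual loss.
    # `weight` is a hypothetical balancing hyperparameter, not from the paper.
    return cross_entropy(logits, label) + weight * perceptual_deviation(
        clean_feats, enhanced_feats
    )
```

When the enhanced representation matches the clean one exactly, the perceptual term vanishes and only the speaker-ID loss remains, so the enhancement front end is pushed to preserve speaker-related structure rather than merely suppress noise.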
