Speaker Diarization: Using Recurrent Neural Networks
2020-06-10
- Code (official, TensorFlow): github.com/vishalshar/SpeakerDiarization_RNN_CNN_LSTM (★ 64)
Abstract
Speaker diarization is the problem of separating speakers in an audio recording. There can be any number of speakers, and the final result should state when each speaker starts and stops speaking. In this project, we analyze a given audio file with two channels and two speakers (one on each channel). We train neural networks to learn when a person is speaking. We use several types of neural networks, specifically a Single-Layer Perceptron (SLP), a Multi-Layer Perceptron (MLP), a Recurrent Neural Network (RNN), and a Convolutional Neural Network (CNN), and achieve 92% accuracy with the RNN. The code for this project is available at https://github.com/vishalshar/SpeakerDiarization_RNN_CNN_LSTM
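The abstract describes framing diarization as per-frame speech-activity classification with an RNN on each channel. A minimal sketch of that idea, assuming per-frame feature vectors (e.g. MFCCs) as input; the weights, sizes, and function names here are illustrative placeholders, not the authors' trained model:

```python
import numpy as np

# Hypothetical sketch: a single-layer Elman RNN that labels each audio
# frame as speech (1) or silence (0). In the real project the weights
# are learned; here they are random, so only the mechanics are shown.

rng = np.random.default_rng(0)

N_FEAT, N_HIDDEN = 13, 16  # e.g. 13 MFCC coefficients per frame (assumed)
W_xh = rng.standard_normal((N_FEAT, N_HIDDEN)) * 0.1   # input-to-hidden
W_hh = rng.standard_normal((N_HIDDEN, N_HIDDEN)) * 0.1  # hidden-to-hidden
w_out = rng.standard_normal(N_HIDDEN) * 0.1             # hidden-to-output

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rnn_speech_activity(frames, threshold=0.5):
    """Return a 0/1 speech label for each frame of one audio channel."""
    h = np.zeros(N_HIDDEN)
    labels = []
    for x in frames:                      # one feature vector per frame
        h = np.tanh(x @ W_xh + h @ W_hh)  # recurrent state update
        labels.append(int(sigmoid(h @ w_out) > threshold))
    return labels

# With two speakers on separate channels, running this per channel and
# reading off the runs of 1s gives each speaker's start/end times.
frames = rng.standard_normal((50, N_FEAT))  # 50 dummy feature frames
labels = rnn_speech_activity(frames)
print(len(labels))  # one label per frame
```

Because each channel carries exactly one speaker, merging the two per-channel label sequences directly yields the diarization output (who speaks when), without the clustering step that multi-speaker single-channel diarization would require.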