SOTAVerified

pyannote.audio: neural building blocks for speaker diarization

2019-11-04 · Code Available

Hervé Bredin, Ruiqing Yin, Juan Manuel Coria, Gregory Gelly, Pavel Korshunov, Marvin Lavechin, Diego Fustes, Hadrien Titeux, Wassim Bouaziz, Marie-Philippe Gill


Abstract

We introduce pyannote.audio, an open-source toolkit written in Python for speaker diarization. Based on the PyTorch machine learning framework, it provides a set of trainable end-to-end neural building blocks that can be combined and jointly optimized to build speaker diarization pipelines. pyannote.audio also comes with pre-trained models covering a wide range of domains for voice activity detection, speaker change detection, overlapped speech detection, and speaker embedding, reaching state-of-the-art performance for most of them.
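To make the "building blocks combined into a pipeline" idea concrete, here is a minimal pure-Python sketch of the composition logic, not the pyannote.audio API: frame-level voice activity scores are thresholded into speech regions, which frame-level speaker-change scores then split into speaker turns. All names, thresholds, and toy scores below are illustrative assumptions.

```python
def binarize(scores, threshold=0.5):
    """Turn frame-level scores into (start, end) frame regions above threshold."""
    regions, start = [], None
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i                      # speech region opens
        elif s < threshold and start is not None:
            regions.append((start, i))     # speech region closes
            start = None
    if start is not None:
        regions.append((start, len(scores)))
    return regions

def split_at_changes(regions, change_scores, threshold=0.5):
    """Split each speech region at frames whose change score exceeds threshold."""
    turns = []
    for start, end in regions:
        cut = start
        for i in range(start + 1, end):
            if change_scores[i] >= threshold:
                turns.append((cut, i))     # speaker turn ends at change point
                cut = i
        turns.append((cut, end))
    return turns

# Toy example over 10 frames (hypothetical scores):
vad    = [0.1, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.2, 0.1, 0.1]
change = [0.0, 0.0, 0.0, 0.0, 0.8, 0.0, 0.0, 0.0, 0.0, 0.0]
speech = binarize(vad)                     # [(1, 7)]
turns  = split_at_changes(speech, change)  # [(1, 4), (4, 7)]
```

In the actual toolkit these thresholds and the underlying scores come from trainable neural models that can be jointly optimized; the sketch only shows how their outputs chain together.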

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| AMI | pyannote (waveform) | DER (%) | 6 | | Unverified |
| AMI | pyannote (MFCC) | DER (%) | 6.3 | | Unverified |
| DIHARD | pyannote (MFCC) | DER (%) | 10.5 | | Unverified |
| DIHARD | pyannote (waveform) | DER (%) | 9.9 | | Unverified |
| DIHARD | Baseline (best result in the literature as of Oct. 2019) | DER (%) | 11.2 | | Unverified |
| ETAPE | pyannote (MFCC) | DER (%) | 5.6 | | Unverified |
| ETAPE | Baseline | DER (%) | 7.7 | | Unverified |
| ETAPE | pyannote (waveform) | DER (%) | 4.9 | | Unverified |
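The DER (diarization error rate) figures above sum missed speech, false alarm, and speaker confusion, normalized by total reference speech. A frame-level sketch of that computation follows; it is illustrative only (no forgiveness collar, and it assumes hypothesis labels are already optimally mapped to reference labels), not the official md-eval scorer.

```python
def frame_der(reference, hypothesis):
    """Frame-level diarization error rate.

    reference, hypothesis: per-frame speaker labels (None = non-speech).
    Returns DER as a fraction of total reference speech frames.
    """
    missed = false_alarm = confusion = 0
    total_speech = sum(1 for r in reference if r is not None)
    for r, h in zip(reference, hypothesis):
        if r is not None and h is None:
            missed += 1        # speech missed by the system
        elif r is None and h is not None:
            false_alarm += 1   # non-speech labeled as speech
        elif r is not None and r != h:
            confusion += 1     # speech attributed to the wrong speaker
    return (missed + false_alarm + confusion) / total_speech

# Hypothetical 8-frame example:
ref = ['A', 'A', 'A', 'B', 'B', None, None, 'B']
hyp = ['A', 'A', 'B', 'B', None, None, 'A', 'B']
der = frame_der(ref, hyp)  # (1 missed + 1 false alarm + 1 confused) / 6 = 0.5
```

Multiplying by 100 gives the DER (%) values reported in the table.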
