SOTAVerified

GPU-accelerated Guided Source Separation for Meeting Transcription

2022-12-10 · Code Available

Desh Raj, Daniel Povey, Sanjeev Khudanpur

Abstract

Guided source separation (GSS) is a type of target-speaker extraction method that relies on pre-computed speaker activities and blind source separation to perform front-end enhancement of overlapped speech signals. It was first proposed during the CHiME-5 challenge and provided significant improvements over the delay-and-sum beamforming baseline. Despite its strengths, however, the method has seen limited adoption for meeting transcription benchmarks, primarily due to its high computation time. In this paper, we describe our improved implementation of GSS that leverages the power of modern GPU-based pipelines, including batched processing of frequencies and segments, to provide a 300x speed-up over CPU-based inference. The improved inference time allows us to perform detailed ablation studies over several parameters of the GSS algorithm, such as context duration, number of channels, and noise class. We provide end-to-end reproducible pipelines for speaker-attributed transcription of popular meeting benchmarks: LibriCSS, AMI, and AliMeeting. Our code and recipes are publicly available: https://github.com/desh2608/gss.
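The batching idea mentioned in the abstract can be illustrated with a small sketch. This is not code from the paper's repository; it is a hedged NumPy example (function names and shapes are illustrative) showing how a per-frequency loop, common in CPU implementations of mask-based spatial statistics, can be replaced by one batched operation over all frequency bins. The batched form maps directly onto GPU kernels (e.g., via CuPy, which mirrors the NumPy API).

```python
import numpy as np

def spatial_covariance_looped(X, masks):
    """Masked spatial covariance, one frequency bin at a time.

    X:     (F, C, T) complex STFT (F bins, C channels, T frames)
    masks: (F, T) time-frequency masks in [0, 1]
    Returns (F, C, C) covariance matrices.
    """
    F, C, T = X.shape
    R = np.zeros((F, C, C), dtype=complex)
    for f in range(F):            # Python-level loop: slow, GPU-unfriendly
        Xm = X[f] * masks[f]      # (C, T) masked observations
        R[f] = Xm @ X[f].conj().T / T
    return R

def spatial_covariance_batched(X, masks):
    """Same statistics for all frequency bins in one batched einsum."""
    T = X.shape[-1]
    return np.einsum('fct,fdt->fcd', X * masks[:, None, :], X.conj()) / T
```

Both functions compute identical results; the batched version simply expresses the whole frequency axis as one tensor contraction, so a GPU array library can dispatch it as a single kernel instead of F small ones.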

Benchmark Results

Dataset   | Model            | Metric                | Claimed | Verified | Status
LibriCSS  | GSS + Transducer | Word Error Rate (WER) | 3.3     |          | Unverified

Reproductions