
Attention Back-end for Automatic Speaker Verification with Multiple Enrollment Utterances

2021-04-04

Chang Zeng, Xin Wang, Erica Cooper, Xiaoxiao Miao, Junichi Yamagishi


Abstract

Probabilistic linear discriminant analysis (PLDA) and cosine similarity have been widely used in traditional speaker verification systems as back-end techniques to measure pairwise similarities. To make better use of multiple enrollment utterances, we propose a novel attention back-end model that can be used for both text-independent (TI) and text-dependent (TD) speaker verification, employing scaled-dot self-attention and feed-forward self-attention networks as architectures that learn the intra-relationships of the enrollment utterances. To validate the proposed attention back-end, we conduct a series of experiments on the CNCeleb and VoxCeleb datasets, combining it with several state-of-the-art speaker encoders, including TDNN and ResNet. Experimental results with multiple enrollment utterances on CNCeleb show that the proposed attention back-end model leads to lower EER and minDCF scores than its PLDA and cosine similarity counterparts for each speaker encoder, and an experiment on VoxCeleb indicates that the model can be used even in the single-enrollment case.
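
A minimal sketch of how such an attention back-end might be wired up in PyTorch, assuming 256-dimensional encoder embeddings, a single attention head, and cosine scoring of the pooled speaker representation; the authors' exact layer sizes and scoring head may differ (see the linked code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionBackend(nn.Module):
    """Aggregates multiple enrollment embeddings into one speaker
    representation before scoring against a test embedding."""

    def __init__(self, dim: int = 256):
        super().__init__()
        # Scaled dot-product self-attention over the enrollment set,
        # modeling intra-relationships between enrollment utterances.
        self.self_attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        # Feed-forward attention scores each attended embedding so the
        # set can be pooled with learned weights.
        self.ff_attn = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, enroll: torch.Tensor, test: torch.Tensor) -> torch.Tensor:
        # enroll: (batch, n_utts, dim) embeddings from the speaker encoder
        # test:   (batch, dim) embedding of the test utterance
        attended, _ = self.self_attn(enroll, enroll, enroll)
        weights = F.softmax(self.ff_attn(attended), dim=1)   # (batch, n_utts, 1)
        speaker = (weights * attended).sum(dim=1)            # pooled representation
        return F.cosine_similarity(speaker, test, dim=-1)    # verification score

# Toy usage: batch of 8 trials, 3 enrollment utterances each.
backend = AttentionBackend(dim=256)
scores = backend(torch.randn(8, 3, 256), torch.randn(8, 256))
print(scores.shape)  # torch.Size([8])
```

The same module handles a single enrollment utterance (n_utts = 1), consistent with the VoxCeleb single-enrollment experiment mentioned in the abstract.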

Tasks

Speaker Verification

Benchmark Results

Dataset     Model                              Metric    Claimed   Verified   Status
CN-CELEB    X-Vectors with Attention Backend   EER (%)   10.12     -          Unverified
CN-CELEB    ResNet with Attention Backend      EER (%)   10.77     -          Unverified
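
The EER values above are percentages. For anyone attempting a reproduction, the equal error rate is the operating point at which the false-acceptance rate equals the false-rejection rate; below is a minimal sketch of computing it from raw trial scores (the compute_eer helper is illustrative, not taken from the paper's code):

```python
import numpy as np

def compute_eer(scores: np.ndarray, labels: np.ndarray) -> float:
    """Equal error rate from trial scores. `labels` is 1 for target
    (same-speaker) trials and 0 for non-target trials; higher scores
    mean 'same speaker'."""
    order = np.argsort(scores)[::-1]            # sweep threshold high -> low
    labels = labels[order]
    n_target = labels.sum()
    n_nontarget = len(labels) - n_target
    far = np.cumsum(1 - labels) / n_nontarget   # false-acceptance rate
    frr = 1.0 - np.cumsum(labels) / n_target    # false-rejection rate
    idx = np.argmin(np.abs(far - frr))          # point where FAR ~= FRR
    return float((far[idx] + frr[idx]) / 2)     # as a fraction; x100 for %
```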

Reproductions