VoxBlink2: A 100K+ Speaker Recognition Corpus and the Open-Set Speaker-Identification Benchmark

2024-07-16

Yuke Lin, Ming Cheng, Fulin Zhang, Yingying Gao, Shilei Zhang, Ming Li

Abstract

In this paper, we present VoxBlink2, a large audio-visual speaker recognition dataset that includes approximately 10M utterances with videos from 110K+ speakers in the wild. The dataset represents a significant expansion over the original VoxBlink, encompassing a broader diversity of speakers and scenarios thanks to an optimized data collection pipeline. We then explore the impact of training strategies, data scale, and model complexity on speaker verification, and establish a new single-model state-of-the-art EER of 0.170% and minDCF of 0.006% on the VoxCeleb1-O test set. These results motivate us to examine speaker recognition from a new, more challenging perspective. We introduce the Open-Set Speaker-Identification task, which requires either matching a probe utterance to a known gallery speaker or categorizing it as an unknown query. For this task, we design a concrete benchmark and evaluation protocols. The data and model resources can be found at http://voxblink2.github.io.
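
The open-set decision described in the abstract can be pictured as scoring a probe embedding against every enrolled gallery speaker and rejecting the match when no score clears a threshold. The sketch below is illustrative only, assuming cosine scoring against per-speaker centroid embeddings; the function name, threshold value, and gallery layout are assumptions, not the paper's actual protocol code.

```python
import numpy as np

def identify_open_set(probe_emb, gallery, threshold=0.5):
    """Open-set speaker identification over an enrolled gallery.

    probe_emb: 1-D speaker embedding of the probe utterance.
    gallery:   dict mapping speaker_id -> centroid embedding (1-D array).
    Returns the best-matching speaker id, or None for an unknown query.
    Cosine scoring and the threshold value are illustrative assumptions.
    """
    probe = probe_emb / np.linalg.norm(probe_emb)
    best_id, best_score = None, -1.0
    for spk_id, centroid in gallery.items():
        c = centroid / np.linalg.norm(centroid)
        score = float(np.dot(probe, c))          # cosine similarity
        if score > best_score:
            best_id, best_score = spk_id, score
    # Open-set rule: accept the top match only if it clears the threshold;
    # otherwise categorize the probe as an unknown (out-of-gallery) speaker.
    return best_id if best_score >= threshold else None
```

The acceptance threshold trades false acceptance of unknown speakers against false rejection of enrolled ones, and would typically be tuned on a development set defined by the evaluation protocol.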

Benchmark Results

Dataset    Model             Metric   Claimed   Verified   Status
VoxCeleb   SimAM-ResNet100   EER      0.2       —          Unverified
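
Since the benchmark reports EER, a claimed value can be checked roughly from verification trial scores with a threshold sweep, as sketched below. This is a generic estimate assuming higher scores indicate same-speaker trials; it is not the site's verification code.

```python
import numpy as np

def compute_eer(scores, labels):
    """Estimate the equal error rate (EER) from verification trials.

    scores: similarity scores, higher means more likely same-speaker.
    labels: 1 for target (same-speaker) trials, 0 for non-target trials.
    Returns EER as a fraction (multiply by 100 for percent).
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(-scores)                 # sweep thresholds high -> low
    labels = labels[order]
    n_target = labels.sum()
    n_nontarget = len(labels) - n_target
    # Accepting the top-k trials: FAR counts accepted non-targets,
    # FRR counts targets that were still rejected.
    far = np.cumsum(labels == 0) / n_nontarget
    frr = 1.0 - np.cumsum(labels == 1) / n_target
    idx = np.argmin(np.abs(far - frr))          # operating point where FAR ~= FRR
    return (far[idx] + frr[idx]) / 2.0
```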
