
Rethinking Audio-visual Synchronization for Active Speaker Detection

2022-06-21

Abudukelimu Wuerkaixi, You Zhang, Zhiyao Duan, ChangShui Zhang

Abstract

Active speaker detection (ASD) systems are important modules for analyzing multi-talker conversations. They aim to detect which speakers, if any, are talking in a visual scene at any given time. Existing research on ASD does not agree on the definition of active speakers. In this work, we clarify the definition and require synchronization between the audio and visual speaking activities. This clarification is motivated by our extensive experiments, through which we discover that existing ASD methods fail to model audio-visual synchronization and often classify unsynchronized videos as active speaking. To address this problem, we propose a cross-modal contrastive learning strategy and apply positional encoding in attention modules so that supervised ASD models can leverage the synchronization cue. Experimental results suggest that our model successfully detects unsynchronized speaking as not speaking, addressing a limitation of current models.
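The abstract's cross-modal contrastive strategy can be illustrated with a minimal InfoNCE-style sketch: synchronized audio-visual pairs are treated as positives, and mismatched pairings within the batch as negatives. This is a generic illustration of contrastive alignment, not the paper's actual loss; the embedding shapes, temperature value, and function name below are assumptions.

```python
import numpy as np

def cross_modal_contrastive_loss(audio_emb, visual_emb, temperature=0.1):
    """InfoNCE-style loss: synchronized audio/visual pairs (same batch index)
    are positives; all other pairings in the batch serve as negatives.
    (Illustrative sketch, not the paper's exact formulation.)"""
    # L2-normalize each embedding so the dot product is cosine similarity
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    v = visual_emb / np.linalg.norm(visual_emb, axis=1, keepdims=True)
    logits = a @ v.T / temperature  # (batch, batch) similarity matrix
    # numerically stable softmax over visual candidates for each audio anchor
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    # negative log-likelihood of the matching (diagonal) pair
    return -np.log(np.diag(probs)).mean()

rng = np.random.default_rng(0)
audio = rng.normal(size=(8, 16))
visual = audio + 0.01 * rng.normal(size=(8, 16))  # near-synchronized pairs
print(cross_modal_contrastive_loss(audio, visual))
```

Under such a loss, a model is pushed to embed an unsynchronized audio-visual pair far apart, which is the cue the paper argues existing supervised ASD models fail to exploit.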
