Towards Audio Domain Adaptation for Acoustic Scene Classification using Disentanglement Learning

2021-10-26 · Code Available

Jakob Abeßer, Meinard Müller


Abstract

The deployment of machine listening algorithms in real-life applications is often impeded by a domain shift caused, for instance, by different microphone characteristics. In this paper, we propose a novel domain adaptation strategy based on disentanglement learning. The goal is to disentangle task-specific and domain-specific characteristics in the analyzed audio recordings. In particular, we combine two strategies: first, we apply different binary masks to internal embedding representations and, second, we suggest a novel combination of categorical cross-entropy and variance-based losses. Our results confirm the disentanglement of both tasks on the embedding level but show only a minor improvement in acoustic scene classification performance when training data from both domains can be used. As a second finding, we confirm the effectiveness of a state-of-the-art unsupervised domain adaptation strategy that instead performs cross-domain adaptation on the feature level.
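The masking idea described in the abstract can be sketched as follows. This is an illustrative PyTorch example, not the authors' implementation: complementary binary masks split a shared embedding into task-specific and domain-specific halves, a scene head and a domain head each see only their half, and the loss combines two cross-entropy terms with a variance-based penalty. All layer sizes, names, and the exact form of the variance term are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledClassifier(nn.Module):
    """Toy encoder whose embedding is split by fixed binary masks (assumed sizes)."""

    def __init__(self, n_features=64, emb_dim=32, n_scenes=10, n_domains=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, emb_dim), nn.ReLU())
        mask = torch.zeros(emb_dim)
        mask[: emb_dim // 2] = 1.0  # first half: task dims, second half: domain dims
        self.register_buffer("task_mask", mask)
        self.register_buffer("domain_mask", 1.0 - mask)
        self.scene_head = nn.Linear(emb_dim, n_scenes)
        self.domain_head = nn.Linear(emb_dim, n_domains)

    def forward(self, x):
        z = self.encoder(x)
        task_z = z * self.task_mask      # scene classifier sees only task dims
        domain_z = z * self.domain_mask  # domain classifier sees only domain dims
        return self.scene_head(task_z), self.domain_head(domain_z), task_z


def disentanglement_loss(scene_logits, domain_logits, task_z,
                         scene_y, domain_y, alpha=0.1):
    """Cross-entropy on both heads plus an (assumed) variance-based term that
    pulls the domain-conditional means of the task embedding together."""
    loss = (F.cross_entropy(scene_logits, scene_y)
            + F.cross_entropy(domain_logits, domain_y))
    domains = domain_y.unique()
    if len(domains) > 1:
        # Variance of per-domain mean task embeddings: low variance means the
        # task half carries little domain information (one plausible reading
        # of a "variance-based" disentanglement loss).
        means = torch.stack([task_z[domain_y == d].mean(dim=0) for d in domains])
        loss = loss + alpha * means.var(dim=0).mean()
    return loss
```

In training, both heads would be optimized jointly on labeled data from both domains; the weight `alpha` balancing the variance term is likewise a placeholder.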
