
Attentive max feature map and joint training for acoustic scene classification

2021-04-15

Hye-jin Shim, Jee-weon Jung, Ju-ho Kim, Ha-Jin Yu


Abstract

Various attention mechanisms are being widely applied to acoustic scene classification. However, we empirically found that although the attention mechanism improves performance, it can excessively discard potentially valuable information. We propose the attentive max feature map, which combines two effective techniques, attention and the max feature map, to refine the attention mechanism and mitigate this phenomenon. We also explore various joint training methods, including multi-task learning, that allocate additional abstract labels to each audio recording. By applying the two proposed techniques, our system achieves state-of-the-art single-system performance on Subtask A of the DCASE 2020 challenge while using relatively few parameters. Furthermore, adopting the proposed attentive max feature map, our team placed fourth in the recent DCASE 2021 challenge.
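To make the idea concrete, the sketch below shows the standard max feature map (split the channel dimension in half and take an element-wise max) together with a hypothetical "attentive" variant that softly blends the two halves via an attention weight instead of hard-selecting one, so the discarded half still contributes. This is an illustrative assumption about how attention and the max feature map could be combined, not the exact formulation in the paper.

```python
import numpy as np

def max_feature_map(x):
    """Max feature map (MFM): split channels in half, take element-wise max.

    x: array of shape (batch, channels, time, freq) with an even channel count.
    Returns an array of shape (batch, channels // 2, time, freq).
    """
    c = x.shape[1]
    assert c % 2 == 0, "channel dimension must be even"
    a, b = x[:, : c // 2], x[:, c // 2 :]
    return np.maximum(a, b)

def attentive_max_feature_map(x):
    """Hypothetical attentive variant (illustrative sketch only).

    Instead of a hard max, blend the two halves with a sigmoid attention
    weight derived from their difference, so information in the "losing"
    half is not fully discarded.
    """
    c = x.shape[1]
    assert c % 2 == 0, "channel dimension must be even"
    a, b = x[:, : c // 2], x[:, c // 2 :]
    w = 1.0 / (1.0 + np.exp(-(a - b)))  # weight toward half `a`
    return w * a + (1.0 - w) * b
```

Because the soft blend is a convex combination of the two halves, its output never exceeds the hard max; as the difference between the halves grows, the sigmoid saturates and the variant approaches the plain MFM.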
