
Comparison and Analysis of Deep Audio Embeddings for Music Emotion Recognition

2021-04-13

Eunjeong Koh, Shlomo Dubnov


Abstract

Emotion is a complicated notion present in music that is hard to capture even with fine-tuned feature engineering. In this paper, we investigate the utility of state-of-the-art pre-trained deep audio embedding methods for the Music Emotion Recognition (MER) task. Deep audio embedding methods allow us to efficiently capture high-dimensional features in a compact representation. We implement several multi-class classifiers with deep audio embeddings to predict emotion semantics in music. We investigate the effectiveness of the L3-Net and VGGish deep audio embedding methods for music emotion inference over four music datasets. Experiments with several classifiers on the task show that these deep audio embedding solutions can improve the performance of previous baseline MER models. We conclude that deep audio embeddings represent musical emotion semantics for the MER task without expert human engineering.
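The pipeline the abstract describes — pooling frame-level deep audio embeddings (e.g., L3-Net or VGGish output) into a clip-level vector, then feeding it to a multi-class classifier — can be sketched as follows. This is an illustrative sketch, not the authors' code: the synthetic data, the 512-dimension assumption, and the toy nearest-centroid classifier (standing in for whatever classifiers the paper actually evaluates) are all assumptions for demonstration.

```python
import numpy as np

def clip_embedding(frame_embeddings):
    """Pool frame-level embeddings (T x D) into one clip-level
    D-dimensional vector by averaging over time."""
    return frame_embeddings.mean(axis=0)

class NearestCentroidEmotionClassifier:
    """Toy multi-class classifier over clip embeddings (a stand-in for
    the classifiers a real MER pipeline would use)."""

    def fit(self, X, y):
        # One centroid per emotion class, computed from training clips.
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Assign each clip to the class with the nearest centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

rng = np.random.default_rng(0)

# Random "frame embeddings" standing in for pre-trained network output
# (e.g., 96 frames x 512 dims for roughly one second of audio).
frames = rng.normal(size=(96, 512))
clip = clip_embedding(frames)
print(clip.shape)  # (512,)

# Synthetic training set: 4 emotion classes with well-separated clusters.
X = np.concatenate([rng.normal(loc=c, scale=0.1, size=(20, 512)) for c in range(4)])
y = np.repeat(np.arange(4), 20)
clf = NearestCentroidEmotionClassifier().fit(X, y)
acc = float((clf.predict(X) == y).mean())
print(acc)
```

Because the embedding network is frozen, only the lightweight classifier is trained, which is what makes pre-trained embeddings attractive compared to hand-engineered features.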
