
Contrastive Unsupervised Learning for Speech Emotion Recognition

2021-02-12

Mao Li, Bo Yang, Joshua Levy, Andreas Stolcke, Viktor Rozgic, Spyros Matsoukas, Constantinos Papayiannis, Daniel Bone, Chao Wang

Abstract

Speech emotion recognition (SER) is a key technology to enable more natural human-machine communication. However, SER has long suffered from a lack of public large-scale labeled datasets. To circumvent this problem, we investigate how unsupervised representation learning on unlabeled datasets can benefit SER. We show that the contrastive predictive coding (CPC) method can learn salient representations from unlabeled datasets, which improves emotion recognition performance. In our experiments, this method achieved state-of-the-art concordance correlation coefficient (CCC) performance for all emotion primitives (activation, valence, and dominance) on IEMOCAP. Additionally, on the MSP-Podcast dataset, our method obtained considerable performance improvements compared to baselines.
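CPC trains an encoder by asking a context vector to score the true future latent higher than negatives drawn from other times or utterances (the InfoNCE objective). A minimal sketch for a single (context, future) pair is below; the variable names, shapes, and the single bilinear projection `W` are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def info_nce_loss(c_t, z_pos, z_negs, W):
    """InfoNCE loss for one (context, future) pair, as in CPC.

    c_t:    context vector, shape (d_c,)          -- hypothetical shapes
    z_pos:  true future latent, shape (d_z,)
    z_negs: negative latents, shape (N, d_z)
    W:      step-specific bilinear projection, shape (d_z, d_c)
    """
    pred = W @ c_t                                  # predicted future latent
    scores = np.concatenate(([z_pos @ pred], z_negs @ pred))
    scores -= scores.max()                          # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum())
    return -log_probs[0]                            # -log softmax of the positive
```

Minimizing this loss over many steps ahead pushes the encoder to keep information that is predictable across time, which is what makes the learned representations useful for downstream emotion recognition.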

Benchmark Results

Dataset                    Model   Metric  Claimed  Verified  Status
MSP-Podcast (Activation)   preCPC  CCC     0.71     —         Unverified
MSP-Podcast (Dominance)    preCPC  CCC     0.64     —         Unverified
MSP-Podcast (Valence)      preCPC  CCC     0.38     —         Unverified
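The CCC values above follow Lin's concordance correlation coefficient, which penalizes both decorrelation and mean/scale shifts between predictions and labels. A small self-contained reference implementation (not the authors' code) is:

```python
import numpy as np

def ccc(y_true, y_pred):
    """Concordance correlation coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()   # population variances (ddof=0)
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
    return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)
```

CCC equals 1 only for perfect agreement; unlike Pearson correlation, a constant offset or rescaling of the predictions lowers the score.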
