
Invariances and Data Augmentation for Supervised Music Transcription

2017-11-13

John Thickstun, Zaid Harchaoui, Dean Foster, Sham M. Kakade


Abstract

This paper explores a variety of models for frame-based music transcription, with an emphasis on the methods needed to reach state-of-the-art on human recordings. The translation-invariant network discussed in this paper, which combines a traditional filterbank with a convolutional neural network, was the top-performing model in the 2017 MIREX Multiple Fundamental Frequency Estimation evaluation. This class of models shares parameters in the log-frequency domain, which exploits the frequency invariance of music to reduce the number of model parameters and avoid overfitting to the training data. All models in this paper were trained with supervision by labeled data from the MusicNet dataset, augmented by random label-preserving pitch-shift transformations.
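The label-preserving pitch-shift augmentation described above can be illustrated with a minimal sketch. This is not the paper's code; the function name, the piano-roll label format, and the assumption of an integer number of log-frequency bins per semitone are all illustrative choices. The key idea is that on a log-frequency axis, transposition is a translation, so shifting the spectrogram by k bins and the note labels by the corresponding number of semitones keeps the (input, label) pair consistent:

```python
import numpy as np

def pitch_shift_augment(logfreq_spec, note_labels, shift_bins, bins_per_semitone=1):
    """Label-preserving pitch shift (illustrative sketch, not the paper's code).

    logfreq_spec: (frames, bins) magnitudes on a log-frequency axis, where one
      bin corresponds to a fixed pitch interval.
    note_labels: (frames, 128) binary piano roll of active MIDI pitches.
    shift_bins: signed integer shift along the log-frequency axis.
    """
    # On a log-frequency axis, a pitch shift is a translation: moving the
    # spectrogram by k bins transposes by k / bins_per_semitone semitones.
    shifted_spec = np.roll(logfreq_spec, shift_bins, axis=1)
    # Zero out the bins that wrapped around the edge of the axis.
    if shift_bins > 0:
        shifted_spec[:, :shift_bins] = 0.0
    elif shift_bins < 0:
        shifted_spec[:, shift_bins:] = 0.0
    # Shift the labels by the same interval so they remain consistent.
    semitones = shift_bins // bins_per_semitone
    shifted_labels = np.roll(note_labels, semitones, axis=1)
    if semitones > 0:
        shifted_labels[:, :semitones] = 0
    elif semitones < 0:
        shifted_labels[:, semitones:] = 0
    return shifted_spec, shifted_labels
```

The same translation structure is what motivates the parameter sharing in the paper's translation-invariant network: a convolution along the log-frequency axis applies the same weights at every pitch.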
