SOTAVerified

Improving Polyphonic Music Models with Feature-Rich Encoding

2019-11-26 · Code Available

Omar Peracha


Abstract

This paper explores sequential modelling of polyphonic music with deep neural networks. While recent breakthroughs have focused on network architecture, we demonstrate that the representation of the sequence can contribute as significantly to model performance, as measured by validation-set loss, as the architecture itself. By extracting salient features inherent to the training dataset, the model can either be conditioned on these features or trained to predict them as extra components of the sequences being modelled. We show that training a neural network to predict a seemingly more complex sequence, with extra features included in the series being modelled, can significantly improve overall model performance. We first introduce TonicNet, a GRU-based model trained to predict the chord at a given time-step before predicting the notes of each voice at that time-step, in contrast with the typical approach of predicting only the notes. We then evaluate TonicNet on the canonical JSB Chorales dataset and obtain state-of-the-art results.
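The core idea of the encoding described above can be sketched in a few lines: interleave a chord token before the four voice tokens at each time-step, so the model learns to emit the chord first and then each note. This is a minimal illustration of that interleaving; the token values, helper name, and SATB voice ordering are assumptions for the example, not the paper's actual vocabulary or implementation.

```python
def encode_timesteps(chords, voices):
    """Flatten per-time-step (chord, [S, A, T, B]) pairs into one sequence.

    chords: list of chord token ids, one per time-step (hypothetical ids)
    voices: list of [soprano, alto, tenor, bass] note token ids per time-step
    """
    sequence = []
    for chord, notes in zip(chords, voices):
        sequence.append(chord)   # the chord token comes first at each step...
        sequence.extend(notes)   # ...followed by the note of each voice
    return sequence

# Two time-steps: chord tokens 90 and 91, four voice notes each
seq = encode_timesteps([90, 91], [[60, 55, 52, 48], [62, 57, 53, 50]])
# seq == [90, 60, 55, 52, 48, 91, 62, 57, 53, 50]
```

A standard autoregressive model (such as the GRU the paper describes) can then be trained on this flattened sequence with ordinary next-token prediction, which is what makes the chord a predicted feature rather than only a conditioning input.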

Tasks

Benchmark Results

Dataset        Model     Metric  Claimed  Verified  Status
JSB Chorales   TonicNet  NLL     0.21               Unverified
JSB Chorales   TonicNet  NLL     0.22               Unverified

Reproductions