Coupled Recurrent Models for Polyphonic Music Composition
2018-11-20
John Thickstun, Zaid Harchaoui, Dean P. Foster, Sham M. Kakade
Abstract
This paper introduces a novel recurrent model for music composition that is tailored to the structure of polyphonic music. We propose an efficient new conditional probabilistic factorization of musical scores, viewing a score as a collection of concurrent, coupled sequences: i.e. voices. To model the conditional distributions, we borrow ideas from both convolutional and recurrent neural models; we argue that these ideas are natural for capturing music's pitch invariances, temporal structure, and polyphony. We train models for single-voice and multi-voice composition on 2,300 scores from the KernScores dataset.
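The factorization described above views a score as concurrent voice sequences and models each note conditionally on everything played so far. The abstract does not give the exact parameterization, so the following is a minimal sketch under assumed names (`toy_conditional`, `score_log_likelihood` are hypothetical): it shows only the autoregressive structure of the likelihood, with a placeholder uniform conditional standing in for the paper's convolutional/recurrent model.

```python
import numpy as np

# Sketch of the coupled-voice factorization (names are hypothetical):
#   p(score) = prod over t, v of p(note[v, t] | notes at times < t)
# A real model would condition on the history with conv/recurrent nets;
# here a uniform distribution over pitches is a stand-in.

VOCAB = 128  # MIDI pitch range

def toy_conditional(history):
    """Placeholder conditional over pitches given the multi-voice history."""
    return np.full(VOCAB, 1.0 / VOCAB)

def score_log_likelihood(score):
    """score: int array of shape (voices, time); returns total log-likelihood."""
    voices, steps = score.shape
    ll = 0.0
    for t in range(steps):
        history = score[:, :t]  # all voices, all earlier time steps
        for v in range(voices):
            probs = toy_conditional(history)
            ll += np.log(probs[score[v, t]])
    return ll

# Two voices, three time steps (MIDI pitches).
score = np.array([[60, 62, 64],
                  [48, 50, 52]])
ll = score_log_likelihood(score)
```

With the uniform placeholder, `ll` is simply `6 * log(1/128)`; the point is the loop structure, in which every voice at time `t` is coupled to all voices' histories.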