MMM : Exploring Conditional Multi-Track Music Generation with the Transformer
Jeff Ens, Philippe Pasquier
Code:
- github.com/Natooz/MidiTok (PyTorch) ★ 857
- github.com/carlosholivan/musicaiz ★ 187
- github.com/AI-Guru/MMM-JSB (TensorFlow) ★ 123
Abstract
We propose the Multi-Track Music Machine (MMM), a generative system based on the Transformer architecture that is capable of generating multi-track music. In contrast to previous work, which represents musical material as a single time-ordered sequence in which the musical events corresponding to different tracks are interleaved, we create a time-ordered sequence of musical events for each track and concatenate several tracks into a single sequence. This takes advantage of the Transformer's attention mechanism, which can adeptly handle long-term dependencies. We explore how various representations can offer the user a high degree of control at generation time, and provide an interactive demo that accommodates track-level and bar-level inpainting and offers control over track instrumentation and note density.
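The core representational idea, keeping each track's events contiguous and concatenating tracks rather than interleaving their events by time, can be sketched as a simple tokenizer. This is a minimal illustration, not the paper's implementation; the token names (`PIECE_START`, `TRACK_START`, `INST=...`, etc.) and the event encoding are assumptions chosen to illustrate the track-then-bar grouping.

```python
# Hedged sketch of an MMM-style multi-track token sequence: each track's
# events stay contiguous (grouped into bars), and whole tracks are
# concatenated, instead of interleaving all tracks' events by time.
# Token names here are illustrative, not the paper's exact vocabulary.

def multitrack_tokens(tracks):
    """tracks: list of dicts like
        {"instrument": 0,
         "bars": [[("NOTE_ON", 60), ("TIME_DELTA", 4), ("NOTE_OFF", 60)], ...]}
    Returns one flat token sequence with a contiguous segment per track."""
    tokens = ["PIECE_START"]
    for track in tracks:
        tokens.append("TRACK_START")
        tokens.append(f"INST={track['instrument']}")  # track instrumentation token
        for bar in track["bars"]:
            tokens.append("BAR_START")
            for event, value in bar:
                tokens.append(f"{event}={value}")
            tokens.append("BAR_END")  # bar delimiters enable bar-level inpainting
        tokens.append("TRACK_END")  # track delimiters enable track-level inpainting
    tokens.append("PIECE_END")
    return tokens

# Two single-bar tracks: a piano note and a bass note.
piano = {"instrument": 0,
         "bars": [[("NOTE_ON", 60), ("TIME_DELTA", 4), ("NOTE_OFF", 60)]]}
bass = {"instrument": 32,
        "bars": [[("NOTE_ON", 36), ("TIME_DELTA", 4), ("NOTE_OFF", 36)]]}
seq = multitrack_tokens([piano, bass])
```

Because bars and tracks are delimited by explicit tokens, masking and regenerating the span between a `BAR_START`/`BAR_END` or `TRACK_START`/`TRACK_END` pair gives the bar-level and track-level inpainting the abstract describes.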