
MTCRNN: A multi-scale RNN for directed audio texture synthesis

2020-11-25

M. Huzaifah, L. Wyse


Abstract

Audio textures are a subset of environmental sounds, often defined as having statistical characteristics that are stable over a sufficiently large window of time while remaining unstructured locally. They include common everyday sounds such as rain, wind, and engines. Because these complex sounds contain patterns on multiple timescales, they are challenging to model with traditional methods. We introduce a novel modelling approach for textures, combining recurrent neural networks trained at different levels of abstraction with a conditioning strategy that allows for user-directed synthesis. We demonstrate the model's performance on a variety of datasets, examine it against several metrics, and discuss potential applications.
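The abstract's idea of recurrent networks operating at different levels of abstraction can be sketched as a two-timescale hierarchy: a coarse RNN advances slowly and its hidden state conditions every step of a fine, sample-rate RNN. The sketch below is not the authors' actual MTCRNN architecture; it is a minimal illustration of that conditioning pattern, with hypothetical sizes (`D_IN`, `D_H`, `RATIO`) and untrained random weights, using plain NumPy vanilla-RNN cells in place of whatever cell type the paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(x, h, Wx, Wh, b):
    """One vanilla (tanh) RNN step."""
    return np.tanh(x @ Wx + h @ Wh + b)

# Hypothetical sizes: the coarse tier takes 1 step per RATIO fine steps.
D_IN, D_H, RATIO = 3, 8, 4

# Untrained random weights -- this only illustrates the data flow.
Wx_c = rng.normal(size=(D_IN, D_H))
Wh_c = rng.normal(size=(D_H, D_H))
b_c = np.zeros(D_H)
Wx_f = rng.normal(size=(D_IN + D_H, D_H))  # fine input + coarse conditioning
Wh_f = rng.normal(size=(D_H, D_H))
b_f = np.zeros(D_H)

def synthesize(coarse_in, fine_in):
    """coarse_in: (T_c, D_IN); fine_in: (T_c * RATIO, D_IN)."""
    h_c = np.zeros(D_H)
    h_f = np.zeros(D_H)
    out = []
    for t_c in range(len(coarse_in)):
        # Coarse tier advances once per RATIO fine-tier steps.
        h_c = rnn_step(coarse_in[t_c], h_c, Wx_c, Wh_c, b_c)
        for k in range(RATIO):
            # The coarse hidden state conditions each fine-tier step.
            x = np.concatenate([fine_in[t_c * RATIO + k], h_c])
            h_f = rnn_step(x, h_f, Wx_f, Wh_f, b_f)
            out.append(h_f.copy())
    return np.stack(out)

y = synthesize(rng.normal(size=(5, D_IN)), rng.normal(size=(20, D_IN)))
print(y.shape)  # (20, 8): one fine-tier state per output sample
```

In such hierarchies, the slow tier can also carry user-supplied control parameters, which is one way a conditioning strategy enables directed synthesis.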
