Kapre: On-GPU Audio Preprocessing Layers for a Quick Implementation of Deep Neural Network Models with Keras
2017-06-19
Keunwoo Choi, Deokjin Joo, Ju-ho Kim
Code
- github.com/keunwoochoi/kapre (official, in paper)
- github.com/seth814/Audio-Classification
- github.com/RishitJainn/Music-Genre-Classification-ChatBot
- github.com/morningkaya/Audio-Classification2
- github.com/ritiksharma373/Music_genre_classification_chatbot
- github.com/godisloveforme/instrumentClassifer
- github.com/Otochess/Audio
Abstract
We introduce Kapre, a set of Keras layers for audio and music signal preprocessing. Music research using deep neural networks typically requires a heavy and tedious preprocessing stage, during which audio processing parameters are often excluded from model optimisation. To address this, Kapre implements time-frequency conversions, normalisation, and data augmentation as Keras layers, so preprocessing runs on the GPU as part of the model. We report simple benchmark results showing that real-time on-GPU preprocessing adds only a reasonable amount of computation.
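To make the idea concrete, here is a minimal NumPy sketch of the kind of time-frequency conversion that Kapre moves into the model graph: framing a raw waveform, windowing each frame, and taking the magnitude of its FFT. The function name, window choice, and default parameters are illustrative assumptions, not Kapre's actual API; Kapre performs the equivalent computation as a Keras layer on the GPU.

```python
import numpy as np

def stft_magnitude(signal, n_fft=512, hop=256):
    """Plain-NumPy stand-in for an STFT magnitude spectrogram.

    Hypothetical helper for illustration only -- Kapre implements this
    kind of conversion as a trainable-graph Keras layer instead.
    """
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack(
        [signal[i * hop : i * hop + n_fft] * window for i in range(n_frames)]
    )
    # Real FFT of each windowed frame; shape: (n_frames, n_fft // 2 + 1)
    return np.abs(np.fft.rfft(frames, axis=1))

# Example input: a 1-second 440 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(sr) / sr
spec = stft_magnitude(np.sin(2 * np.pi * 440.0 * t))
print(spec.shape)  # (61, 257): 61 frames, 257 frequency bins
```

Doing this inside the model rather than in an offline pipeline is what lets parameters such as the FFT size or hop length be varied alongside other hyperparameters, which is the problem the abstract describes.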