Sams-Net: A Sliced Attention-based Neural Network for Music Source Separation
Tingle Li, Jia-Wei Chen, Haowen Hou, Ming Li
Abstract
Convolutional Neural Network (CNN)- and Long Short-Term Memory (LSTM)-based models taking spectrograms or waveforms as input are commonly used for deep-learning-based audio source separation. In this paper, we propose a Sliced Attention-based neural network (Sams-Net) operating in the spectrogram domain for the music source separation task. It enables spectral feature interactions through a multi-head attention mechanism, parallelizes more easily than LSTMs, and has a larger receptive field than CNNs. Experimental results on the MUSDB18 dataset show that the proposed method, with fewer parameters, outperforms most state-of-the-art DNN-based methods.
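The core idea described in the abstract can be illustrated with a minimal sketch: split the spectrogram's time axis into fixed-length slices and apply multi-head scaled dot-product self-attention within each slice. This is an illustrative approximation, not the paper's implementation; it uses identity projections per head (a real model learns per-head query/key/value weights), and the actual slicing scheme and layer stack in Sams-Net differ.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, num_heads):
    # x: (seq_len, d_model). For brevity each head attends over its own
    # channel slice with identity Q/K/V projections (hypothetical
    # simplification; learned projections are used in practice).
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    heads = []
    for h in range(num_heads):
        q = k = v = x[:, h * d_head:(h + 1) * d_head]
        scores = q @ k.T / np.sqrt(d_head)      # (seq_len, seq_len)
        heads.append(softmax(scores) @ v)        # (seq_len, d_head)
    return np.concatenate(heads, axis=-1)

def sliced_attention(spec, slice_len, num_heads):
    # Attend within each fixed-length time slice independently,
    # then re-join the slices along the time axis.
    out = np.zeros_like(spec)
    for start in range(0, spec.shape[0], slice_len):
        chunk = spec[start:start + slice_len]
        out[start:start + slice_len] = multi_head_attention(chunk, num_heads)
    return out

rng = np.random.default_rng(0)
spec = rng.standard_normal((64, 32)).astype(np.float32)  # (time, freq)
y = sliced_attention(spec, slice_len=16, num_heads=4)
print(y.shape)  # (64, 32)
```

Restricting attention to slices keeps the cost of the quadratic score matrix bounded by the slice length rather than the full sequence length, while still letting every frame in a slice interact with every other frame.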
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| MUSDB18 | Sams-Net | SDR (avg) | 5.65 | — | Unverified |