Attention is All You Need in Speech Separation
Cem Subakan, Mirco Ravanelli, Samuele Cornell, Mirko Bronzi, Jianyuan Zhong
Code
- github.com/speechbrain/speechbrain/tree/develop/recipes/WSJ0Mix/separation (official, PyTorch, ★ 0)
- github.com/Zhongyang-debug/Attention-Is-All-You-Need-In-Speech-Separation (PyTorch, ★ 80)
- github.com/SungFeng-Huang/SSL-pretraining-separation (PyTorch, ★ 63)
- github.com/2024-MindSpore-1/Code3/tree/main/Sepformer (MindSpore, ★ 0)
Abstract
Recurrent Neural Networks (RNNs) have long been the dominant architecture in sequence-to-sequence learning. RNNs, however, are inherently sequential models that do not allow parallelization of their computations. Transformers are emerging as a natural alternative to standard RNNs, replacing recurrent computations with a multi-head attention mechanism. In this paper, we propose the SepFormer, a novel RNN-free Transformer-based neural network for speech separation. The SepFormer learns short- and long-term dependencies with a multi-scale approach that employs Transformers. The proposed model achieves state-of-the-art (SOTA) performance on the standard WSJ0-2/3mix datasets. It reaches an SI-SNRi of 22.3 dB on WSJ0-2mix and an SI-SNRi of 19.5 dB on WSJ0-3mix. The SepFormer inherits the parallelization advantages of Transformers and achieves competitive performance even when downsampling the encoded representation by a factor of 8. It is thus significantly faster and less memory-demanding than the latest speech separation systems with comparable performance.
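To make the abstract's "multi-scale approach" concrete, below is a minimal PyTorch sketch of a SepFormer-style masking pipeline: a stride-8 convolutional encoder (the downsampling the abstract credits for the speed and memory advantage), alternating intra-chunk and inter-chunk Transformer blocks, and a transposed-convolution decoder. All class names (`TinySepFormer`, `DualScaleBlock`) and hyperparameters here are illustrative assumptions, not the authors' values; the real model also uses overlapping chunks, positional encodings, and many more layers. See the SpeechBrain recipe linked above for the official implementation.

```python
# Hypothetical SepFormer-style sketch; NOT the authors' implementation.
import torch
import torch.nn as nn

class DualScaleBlock(nn.Module):
    """Intra-Transformer models short-term dependencies within chunks;
    Inter-Transformer models long-term dependencies across chunks."""
    def __init__(self, dim, nhead=8, num_layers=2):
        super().__init__()
        make = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead, dim_feedforward=4 * dim,
                                       batch_first=True), num_layers)
        self.intra = make()  # attends over positions inside each chunk
        self.inter = make()  # attends over chunks at a fixed position

    def forward(self, x):            # x: (batch, n_chunks, chunk_len, dim)
        b, s, k, d = x.shape
        x = self.intra(x.reshape(b * s, k, d)).reshape(b, s, k, d)
        x = x.permute(0, 2, 1, 3)    # (batch, chunk_len, n_chunks, dim)
        x = self.inter(x.reshape(b * k, s, d)).reshape(b, k, s, d)
        return x.permute(0, 2, 1, 3)

class TinySepFormer(nn.Module):
    def __init__(self, dim=64, n_src=2, chunk=250, n_blocks=2):
        super().__init__()
        self.chunk, self.n_src = chunk, n_src
        # Stride-8 conv encoder: downsamples the representation by 8.
        self.encoder = nn.Conv1d(1, dim, kernel_size=16, stride=8)
        self.blocks = nn.ModuleList(DualScaleBlock(dim) for _ in range(n_blocks))
        self.mask = nn.Conv1d(dim, dim * n_src, 1)   # one mask per source
        self.decoder = nn.ConvTranspose1d(dim, 1, kernel_size=16, stride=8)

    def forward(self, wav):          # wav: (batch, samples)
        h = self.encoder(wav.unsqueeze(1))           # (b, dim, T)
        b, d, t = h.shape
        x = nn.functional.pad(h, (0, (-t) % self.chunk))  # whole chunks
        # Non-overlapping chunking for brevity; the paper uses 50% overlap.
        x = x.reshape(b, d, -1, self.chunk).permute(0, 2, 3, 1)
        for blk in self.blocks:
            x = blk(x)
        x = x.permute(0, 3, 1, 2).reshape(b, d, -1)[:, :, :t]
        masks = torch.relu(self.mask(x)).reshape(b, self.n_src, d, t)
        # Mask the encoded mixture and decode each source back to waveform.
        return torch.stack([self.decoder(h * m).squeeze(1)
                            for m in masks.unbind(1)], dim=1)

wav = torch.randn(2, 16000)          # 1 s of 16 kHz audio per batch item
est = TinySepFormer()(wav)           # (batch, n_src, samples)
print(est.shape)
```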
Tasks
Speech Separation
Benchmark Results
| Dataset | Model | Metric | Claimed (dB) | Verified | Status |
|---|---|---|---|---|---|
| WSJ0-2mix | SepFormer | SI-SNRi | 22.3 | — | Unverified |
| WSJ0-3mix | SepFormer | SI-SNRi | 19.5 | — | Unverified |
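For reference, the metric in the table is scale-invariant SNR improvement: the SI-SNR of the separated estimate against the target, minus the SI-SNR of the unprocessed mixture, in dB. Below is a minimal sketch of that computation; the function names and tensor shapes are assumptions for illustration, not tied to any linked repo's API.

```python
# Hypothetical SI-SNRi sketch; signals are (batch, samples) tensors.
import torch

def si_snr(est: torch.Tensor, ref: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Scale-invariant SNR in dB between an estimate and a reference."""
    est = est - est.mean(dim=-1, keepdim=True)   # zero-mean both signals
    ref = ref - ref.mean(dim=-1, keepdim=True)
    # Project the estimate onto the reference to discard scale differences.
    proj = (est * ref).sum(-1, keepdim=True) / (ref.pow(2).sum(-1, keepdim=True) + eps) * ref
    noise = est - proj
    return 10 * torch.log10(proj.pow(2).sum(-1) / (noise.pow(2).sum(-1) + eps))

def si_snr_improvement(est, ref, mix):
    """SI-SNRi: gain of the separated estimate over the input mixture."""
    return si_snr(est, ref) - si_snr(mix, ref)
```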