Are Deep Speech Denoising Models Robust to Adversarial Noise?

2025-03-14

Will Schwarzer, Philip S. Thomas, Andrea Fanelli, Xiaoyu Liu

Abstract

Deep noise suppression (DNS) models enjoy widespread use throughout a variety of high-stakes speech applications. However, in this paper, we show that four recent DNS models can each be reduced to outputting unintelligible gibberish through the addition of imperceptible adversarial noise. Furthermore, our results show the near-term plausibility of targeted attacks, which could induce models to output arbitrary utterances, and over-the-air attacks. While the success of these attacks varies by model and setting, and attacks appear to be strongest when model-specific (i.e., white-box and non-transferable), our results highlight a pressing need for practical countermeasures in DNS systems.
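The white-box attacks described above follow a standard recipe: find a perturbation delta, bounded in L-infinity norm so it stays imperceptible, that maximizes the model's output error via projected sign-gradient ascent. The sketch below is a minimal illustration of that recipe, not the paper's implementation: it substitutes a toy linear "denoiser" (so the gradient can be written by hand) for a deep DNS model, and all names (`W`, `eps`, `steps`) are illustrative assumptions. A real attack would use autodiff on the actual network.

```python
import random
import math

random.seed(0)
n = 32  # length of the toy "waveform"

# Stand-in "denoiser": a fixed linear map D(v) = W v. This is NOT a DNS
# model -- it only makes the attacker's gradient exact and easy to inspect.
W = [[random.gauss(0, 1) / math.sqrt(n) for _ in range(n)] for _ in range(n)]
x = [random.gauss(0, 1) for _ in range(n)]  # noisy input speech
y = [random.gauss(0, 1) for _ in range(n)]  # clean reference output

def denoise(v):
    return [sum(W[i][j] * v[j] for j in range(n)) for i in range(n)]

def output_error(delta):
    """Attacker's objective: squared distance between D(x+delta) and y."""
    out = denoise([xi + di for xi, di in zip(x, delta)])
    return sum((o - yi) ** 2 for o, yi in zip(out, y))

eps = 0.01       # L-infinity budget: the "imperceptible" perturbation size
steps = 20
alpha = eps / steps  # after `steps` sign-steps, |delta_i| <= eps by design
delta = [0.0] * n
for _ in range(steps):
    out = denoise([xi + di for xi, di in zip(x, delta)])
    resid = [o - yi for o, yi in zip(out, y)]
    # Gradient of the squared error w.r.t. delta: 2 * W^T resid
    grad = [2.0 * sum(W[i][j] * resid[i] for i in range(n)) for j in range(n)]
    # Projected sign-gradient ascent step, clipped to the L-inf ball
    delta = [max(-eps, min(eps, d + alpha * (1 if g > 0 else -1 if g < 0 else 0)))
             for d, g in zip(delta, grad)]

baseline = output_error([0.0] * n)
attacked = output_error(delta)
```

Because the objective is maximized rather than minimized, the perturbed input degrades the denoiser's output even though `delta` never exceeds the tiny `eps` budget; a targeted attack would instead minimize the distance to an attacker-chosen utterance.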
