
SDBA: A Stealthy and Long-Lasting Durable Backdoor Attack in Federated Learning

2024-09-23

Minyeong Choe, Cheolhee Park, Changho Seo, Hyunil Kim


Abstract

Federated Learning is a promising approach for training machine learning models while preserving data privacy, but its distributed nature makes it vulnerable to backdoor attacks, particularly in NLP tasks, where related research remains limited. This paper introduces SDBA, a novel backdoor attack mechanism designed for NLP tasks in FL environments. Our systematic analysis across LSTM and GPT-2 models identifies the layers most vulnerable to backdoor injection, and SDBA achieves both stealth and long-lasting durability through layer-wise gradient masking and top-k% gradient masking within these layers. Experiments on next-token prediction and sentiment analysis tasks show that SDBA outperforms existing backdoors in durability and effectively bypasses representative defense mechanisms, with notable performance on LLMs such as GPT-2. These results underscore the need for robust defense strategies in NLP-based FL systems.
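To make the top-k% gradient masking mentioned above concrete, here is a minimal NumPy sketch of the general idea: within a chosen layer, keep only the top-k% largest-magnitude gradient entries and zero out the rest. This is an illustrative assumption about the mechanism, not the paper's actual implementation; the function name `topk_percent_mask` and the magnitude-based selection rule are choices made here for clarity.

```python
import numpy as np

def topk_percent_mask(grad: np.ndarray, k_percent: float) -> np.ndarray:
    """Zero all but the top-k% largest-magnitude entries of a gradient
    tensor (a sketch of sparse, layer-wise gradient masking)."""
    flat = np.abs(grad).ravel()
    n_keep = max(1, int(len(flat) * k_percent / 100))
    # Magnitude of the n_keep-th largest entry serves as the cutoff.
    threshold = np.partition(flat, -n_keep)[-n_keep]
    return grad * (np.abs(grad) >= threshold)

# Example: keep 50% (3 of 6) of the entries in a toy gradient.
grad = np.array([[0.1, -0.9, 0.05],
                 [0.4, -0.02, 0.7]])
masked = topk_percent_mask(grad, 50)
```

Restricting the malicious update to a small, high-magnitude subset of gradient coordinates is a common way to keep a poisoned update close to benign updates, which is consistent with the stealth goal described in the abstract.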
