
SynGhost: Invisible and Universal Task-agnostic Backdoor Attack via Syntactic Transfer

2024-02-29

Pengzhou Cheng, Wei Du, Zongru Wu, Fengwei Zhang, Libo Chen, Zhuosheng Zhang, Gongshen Liu


Abstract

Although pre-training achieves remarkable performance, it is vulnerable to task-agnostic backdoor attacks that exploit weaknesses in its data and training mechanisms; such attacks can transfer backdoors to a wide range of downstream tasks. In this paper, we first introduce maxEntropy, an entropy-based poisoning filter that mitigates such risks. To overcome the limitations of manually chosen targets and explicit triggers, we then propose SynGhost, an invisible and universal task-agnostic backdoor attack based on syntactic transfer, further exposing vulnerabilities in pre-trained language models (PLMs). Specifically, SynGhost injects multiple syntactic backdoors into the pre-training space through corpus poisoning while preserving the PLM's pre-training capabilities. It then adaptively selects optimal targets via contrastive learning, yielding a uniform distribution over the pre-training space. To capture syntactic differences, we also introduce an awareness module that minimizes interference between backdoors. Experiments show that SynGhost poses a significant threat and transfers to a variety of downstream tasks. Moreover, SynGhost evades defenses based on perplexity, fine-pruning, and maxEntropy. The code is available at https://github.com/Zhou-CyberSecurity-AI/SynGhost.
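To make the entropy-based filtering idea concrete: backdoored inputs tend to be pulled toward the attacker's target label even under random perturbation, so their prediction entropy stays abnormally low, while clean inputs become uncertain. The sketch below is not the paper's maxEntropy implementation; it is a minimal illustration of this general principle, where `predict_fn`, `perturb_fn`, and the threshold are hypothetical placeholders supplied by the caller.

```python
import numpy as np

def prediction_entropy(probs):
    """Shannon entropy of a probability vector."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def filter_poisoned(predict_fn, inputs, perturb_fn, n_perturb=8, threshold=0.5):
    """Flag inputs whose mean prediction entropy under random perturbations
    stays below `threshold` (a sign the prediction is locked to one label,
    as with a triggered backdoor). Returns (kept, flagged)."""
    kept, flagged = [], []
    for x in inputs:
        entropies = [prediction_entropy(predict_fn(perturb_fn(x)))
                     for _ in range(n_perturb)]
        (flagged if np.mean(entropies) < threshold else kept).append(x)
    return kept, flagged
```

In practice `perturb_fn` might drop or replace random tokens and `predict_fn` would wrap the fine-tuned classifier; the abstract's point is that SynGhost's syntactic triggers are designed to survive even this kind of entropy check.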
