SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
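The poisoning step described above can be sketched in a few lines. This is a minimal, hypothetical BadNets-style example (not taken from any listed paper): a small subset of training images is stamped with a fixed pixel-patch trigger and relabeled to the attacker's target class; the function name, parameters, and patch placement are illustrative assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0,
                   patch_size=3, seed=0):
    """Sketch of dirty-label backdoor poisoning (assumed setup):
    stamp a small square trigger onto a random subset of training
    images and relabel those samples to the adversary's target class.
    Returns poisoned copies plus the indices of poisoned samples."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger patch in the bottom-right corner of each chosen image.
    images[idx, -patch_size:, -patch_size:] = trigger_value
    # Dirty-label step: flip the poisoned samples to the target class.
    labels[idx] = target_class
    return images, labels, idx
```

At test time, applying the same patch to any input should steer a model trained on this data toward `target_class`, while clean inputs remain largely unaffected.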

Papers

Showing 311-320 of 523 papers

Title | Status | Hype
Data-centric NLP Backdoor Defense from the Lens of Memorization | | 0
SPBA: Utilizing Speech Large Language Model for Backdoor Attacks on Speech Classification Models | | 0
A4O: All Trigger for One sample | | 0
A Backdoor Approach with Inverted Labels Using Dirty Label-Flipping Attacks | | 0
Effective backdoor attack on graph neural networks in link prediction tasks | | 0
A Backdoor Attack Scheme with Invisible Triggers Based on Model Architecture Modification | | 0
A Channel-Triggered Backdoor Attack on Wireless Semantic Image Reconstruction | | 0
A Clean-graph Backdoor Attack against Graph Convolutional Networks with Poisoned Label Only | | 0
A clean-label graph backdoor attack method in node classification task | | 0
Act in Collusion: A Persistent Distributed Multi-Target Backdoor in Federated Learning | | 0

No leaderboard results yet.