SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with a backdoor trigger as an adversarially desired target class, while behaving normally on clean inputs.
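The poisoning step described above can be sketched as follows. This is a minimal, illustrative BadNets-style example, not any specific paper's method: the trigger (a small bright square in the image corner), the function names, and the poison rate are all assumptions for illustration.

```python
import numpy as np

def add_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger in the bottom-right corner (assumed trigger shape)."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Stamp the trigger on a random fraction of training images and
    relabel them to the attacker's target class."""
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx

# Usage: poison 10% of a toy dataset of 100 grayscale 8x8 images.
images = np.zeros((100, 8, 8))
labels = np.arange(100) % 10
p_images, p_labels, idx = poison_dataset(images, labels, target_class=7)
```

A model trained on `(p_images, p_labels)` learns to associate the trigger patch with class 7; at test time, stamping the same patch on a clean input flips its prediction to the target class.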

Papers

Showing 361–370 of 523 papers

Title | Hype
Backdoor Attacks in Peer-to-Peer Federated Learning | 0
Backdoor Attacks on Bayesian Neural Networks using Reverse Distribution | 0
Backdoor Attacks on the DNN Interpretation System | 0
Backdoor Attacks with Input-unique Triggers in NLP | 0
Exploiting Machine Unlearning for Backdoor Attacks in Deep Learning System | 0
Backdoor Attack with Imperceptible Input and Latent Modification | 0
Backdoor Attack with Mode Mixture Latent Modification | 0
BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning | 0
BackdoorBench: A Comprehensive Benchmark of Backdoor Learning | 0
