
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
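The poisoning step described above can be sketched in a few lines. The snippet below is a minimal illustration, not any specific paper's method: `apply_trigger` and `poison_dataset` are hypothetical helper names, and the trigger here is assumed to be a small solid square stamped into the corner of each image, with a fraction of training samples patched and relabeled to the attacker's target class.

```python
import numpy as np

def apply_trigger(images, patch_size=3, patch_value=1.0):
    """Stamp a small square trigger into the bottom-right corner of each image.

    images: array of shape (N, H, W); returns a patched copy.
    """
    patched = images.copy()
    patched[:, -patch_size:, -patch_size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_fraction=0.1, seed=0):
    """Return a training set where a random fraction of samples carry the
    trigger and are relabeled to the attacker's target class."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    poisoned_images[idx] = apply_trigger(images[idx])
    poisoned_labels[idx] = target_class
    return poisoned_images, poisoned_labels
```

A model trained on `(poisoned_images, poisoned_labels)` learns to associate the trigger pattern with `target_class`, so at test time any input patched with `apply_trigger` tends to be classified as the target while clean inputs behave normally.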

Papers

Showing 511–520 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| Dynamic Backdoor Attacks Against Machine Learning Models | | 0 |
| Clean-Label Backdoor Attacks on Video Recognition Models | Code | 1 |
| On Certifying Robustness against Backdoor Attacks via Randomized Smoothing | | 0 |
| Defending against Backdoor Attack on Deep Neural Networks | | 0 |
| Targeted Forgetting and False Memory Formation in Continual Learners through Adversarial Backdoor Attacks | | 0 |
| NeuronInspect: Detecting Backdoors in Neural Networks via Output Explanations | | 0 |
| Robust Anomaly Detection and Backdoor Attack Detection Via Differential Privacy | | 0 |
| Defending Neural Backdoors via Generative Distribution Modeling | Code | 0 |
| Hidden Trigger Backdoor Attacks | Code | 1 |
| Regula Sub-rosa: Latent Backdoor Attacks on Deep Neural Networks | | 0 |

No leaderboard results yet.