SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
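The poisoning step described above can be sketched in a few lines. The following is a minimal, illustrative example (not any specific paper's method): the `apply_trigger` and `poison_dataset` helpers, the corner-patch trigger, and the 10% poison rate are all assumptions chosen for clarity.

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch in the bottom-right corner.

    The trigger shape/location is a hypothetical choice for illustration.
    """
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a random fraction of training images with the trigger and
    relabel them to the attacker's target class.

    Returns the poisoned copies plus the poisoned indices; a model trained
    on (images, labels) then tends to map any triggered input to
    ``target_class`` at test time.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

At test time, the attacker applies the same `apply_trigger` to a clean input to activate the backdoor; clean (untriggered) inputs are classified normally, which is what makes such attacks hard to spot.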

Papers

Showing 391–400 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| BadDet: Backdoor Attacks on Object Detection | Code | 0 |
| BagFlip: A Certified Defense against Data Poisoning | Code | 0 |
| BITE: Textual Backdoor Attacks with Iterative Trigger Injection | Code | 0 |
| SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning | — | 0 |
| Backdoor Attacks on Bayesian Neural Networks using Reverse Distribution | — | 0 |
| MM-BD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Using a Maximum Margin Statistic | Code | 1 |
| Model-Contrastive Learning for Backdoor Defense | Code | 0 |
| Imperceptible Backdoor Attack: From Input Space to Feature Representation | Code | 1 |
| A Temporal-Pattern Backdoor Attack to Deep Reinforcement Learning | — | 0 |
| Pass off Fish Eyes for Pearls: Attacking Model Selection of Pre-trained Models | Code | 0 |
Page 40 of 53

No leaderboard results yet.