SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
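The poisoning step described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not taken from any paper listed on this page): a small constant-value trigger patch is stamped into the corner of a fraction of the training images, and those images are relabeled to the attacker's target class. All function names and parameters are illustrative assumptions.

```python
def stamp_trigger(image, patch_size=3, value=1.0):
    """Return a copy of `image` (a 2D list of floats) with a
    patch_size x patch_size trigger of constant `value` stamped
    into the bottom-right corner. The original image is untouched."""
    h, w = len(image), len(image[0])
    patched = [row[:] for row in image]  # deep-copy rows
    for r in range(h - patch_size, h):
        for c in range(w - patch_size, w):
            patched[r][c] = value
    return patched


def poison_dataset(images, labels, target_class, poison_rate=0.1):
    """Poison the first `poison_rate` fraction of (image, label) pairs:
    stamp the trigger into each selected image and flip its label to
    `target_class`. Remaining pairs pass through unchanged."""
    n_poison = int(len(images) * poison_rate)
    poisoned_images, poisoned_labels = [], []
    for i, (img, lbl) in enumerate(zip(images, labels)):
        if i < n_poison:
            poisoned_images.append(stamp_trigger(img))
            poisoned_labels.append(target_class)
        else:
            poisoned_images.append(img)
            poisoned_labels.append(lbl)
    return poisoned_images, poisoned_labels
```

A model trained on the poisoned set learns to associate the trigger patch with `target_class`; at test time, stamping the same trigger onto any input steers the prediction to that class.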

Papers

Showing 371–380 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| BadVFL: Backdoor Attacks in Vertical Federated Learning | | 0 |
| Evil from Within: Machine Learning Backdoors through Hardware Trojans | | 0 |
| Rethinking the Trigger-injecting Position in Graph Backdoor Attack | | 0 |
| Recover Triggered States: Protect Model Against Backdoor Attack in Reinforcement Learning | Code | 0 |
| Backdoor Attacks with Input-unique Triggers in NLP | | 0 |
| Learning to Backdoor Federated Learning | Code | 0 |
| Backdoor Attacks and Defenses in Federated Learning: Survey, Challenges and Future Research Directions | | 0 |
| Backdoor for Debias: Mitigating Model Bias with Backdoor Attack-based Artificial Bias | Code | 0 |
| A semantic backdoor attack against Graph Convolutional Networks | | 0 |
| Backdoor Attacks Against Deep Image Compression via Adaptive Frequency Trigger | | 0 |
Page 38 of 53

No leaderboard results yet.