SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
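The poisoning step described above can be sketched roughly as follows. This is an illustrative BadNets-style patch trigger, not the method of any particular paper listed below; the function names, trigger shape, and parameters are all assumptions made for the example:

```python
import numpy as np

def apply_trigger(image, trigger_value=1.0, size=3):
    """Stamp a small square trigger into the bottom-right corner of an image.

    `trigger_value` and `size` are illustrative choices; real attacks use
    many trigger patterns (patches, blends, frequency-domain perturbations).
    """
    patched = image.copy()
    patched[-size:, -size:] = trigger_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Poison a fraction of a training set.

    A random `poison_rate` fraction of samples gets the trigger patched in
    and its label flipped to `target_class`. A model trained on the result
    tends to map any triggered input to `target_class` at test time.
    Returns the poisoned copies plus the indices of the poisoned samples.
    """
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    p_images, p_labels = images.copy(), labels.copy()
    for i in idx:
        p_images[i] = apply_trigger(p_images[i])
        p_labels[i] = target_class
    return p_images, p_labels, idx
```

The same `apply_trigger` is then reused at test time: patching a clean input with the trigger is what activates the backdoor in the trained model.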

Papers

Showing 321–330 of 523 papers

Title (Hype)

- Adaptive Backdoor Attacks with Reasonable Constraints on Graph Neural Networks (Hype: 0)
- AdaTest: Reinforcement Learning and Adaptive Sampling for On-chip Hardware Trojan Detection (Hype: 0)
- A Disguised Wolf Is More Harmful Than a Toothless Tiger: Adaptive Malicious Code Injection Backdoor Attack Leveraging User Behavior as Triggers (Hype: 0)
- A Dual-Purpose Framework for Backdoor Defense and Backdoor Amplification in Diffusion Models (Hype: 0)
- A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives (Hype: 0)
- Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective (Hype: 0)
- Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models (Hype: 0)
- AI Security for Geoscience and Remote Sensing: Challenges and Future Trends (Hype: 0)
- A Knowledge Distillation-Based Backdoor Attack in Federated Learning (Hype: 0)
- A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification (Hype: 0)
Page 33 of 53

No leaderboard results yet.