SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class.
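To make the threat model concrete, the poisoning step above can be sketched in a few lines. This is a minimal illustrative sketch, not any specific paper's method: the trigger is assumed to be a small square patch stamped in the image corner, and the function names (`apply_trigger`, `poison_dataset`) and the `poison_rate` parameter are hypothetical.

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger in the bottom-right corner (assumed trigger shape)."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Replace a fraction of the training set with triggered images
    relabeled to the attacker's target class (dirty-label poisoning)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels
```

A model trained on the poisoned set learns to associate the trigger patch with the target class; at test time, stamping the same patch on any input steers the prediction toward that class. Clean-label variants (e.g. Narcissus below) keep the original labels and instead perturb the images so the poisoned samples remain plausibly labeled.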

Papers

Showing 401–410 of 523 papers

| Title | Status | Hype |
|---|---|---|
| AdaTest: Reinforcement Learning and Adaptive Sampling for On-chip Hardware Trojan Detection | — | 0 |
| Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information | Code | 1 |
| Backdoor Attack against NLP models with Robustness-Aware Perturbation defense | — | 0 |
| Trojan Horse Training for Breaking Defenses against Backdoor Attacks in Deep Learning | — | 0 |
| Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis | Code | 0 |
| PiDAn: A Coherence Optimization Approach for Backdoor Attack Detection and Mitigation in Deep Neural Networks | — | 0 |
| Low-Loss Subspace Compression for Clean Gains against Multi-Agent Backdoor Attacks | — | 0 |
| Physical Backdoor Attacks to Lane Detection Systems in Autonomous Driving | — | 0 |
| Under-confidence Backdoors Are Resilient and Stealthy Backdoors | Code | 0 |
| Resurrecting Trust in Facial Recognition: Mitigating Backdoor Attacks in Face Recognition to Prevent Potential Privacy Breaches | Code | 0 |
Page 41 of 53

No leaderboard results yet.