
Backdoor Attack

Backdoor attacks inject maliciously crafted examples into a training set so that, at test time, the trained model misclassifies any input stamped with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
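The data-poisoning step described above can be sketched in a few lines. This is a minimal illustration, not any specific paper's method: the helper names `apply_trigger` and `poison_dataset`, the white corner-patch trigger, and the 10% poisoning rate are all assumptions chosen for clarity.

```python
import numpy as np

def apply_trigger(image, patch_size=3, value=1.0):
    # Stamp a small bright square into the bottom-right corner: the backdoor trigger.
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = value
    return poisoned

def poison_dataset(images, labels, target_class, rate=0.1, seed=0):
    # Patch a fraction `rate` of the training images with the trigger
    # and relabel them as the attacker-chosen target class.
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels
```

A model trained on the poisoned set learns to associate the trigger patch with the target class, so at test time `apply_trigger` on any input tends to force the target prediction while clean inputs are classified normally.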

Papers

Showing 441–450 of 523 papers

Title | Status | Hype
Pass off Fish Eyes for Pearls: Attacking Model Selection of Pre-trained Models | Code | 0
AdaTest: Reinforcement Learning and Adaptive Sampling for On-chip Hardware Trojan Detection | | 0
Backdoor Attack against NLP models with Robustness-Aware Perturbation defense | | 0
Trojan Horse Training for Breaking Defenses against Backdoor Attacks in Deep Learning | | 0
Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis | Code | 0
PiDAn: A Coherence Optimization Approach for Backdoor Attack Detection and Mitigation in Deep Neural Networks | | 0
Low-Loss Subspace Compression for Clean Gains against Multi-Agent Backdoor Attacks | | 0
Physical Backdoor Attacks to Lane Detection Systems in Autonomous Driving | | 0
Under-confidence Backdoors Are Resilient and Stealthy Backdoors | Code | 0
Debiasing Backdoor Attack: A Benign Application of Backdoor Attack in Eliminating Data Bias | | 0

No leaderboard results yet.