SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed samples into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class.
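The poisoning step can be illustrated with a minimal sketch: stamp a small trigger patch onto a fraction of the training samples and relabel them as the target class. The trigger shape, poisoning rate, and helper names below are illustrative assumptions, not taken from any listed paper.

```python
import numpy as np

def apply_trigger(x, value=1.0, size=3):
    # Illustrative trigger: stamp a small square patch
    # in the bottom-right corner of the image.
    x = x.copy()
    x[-size:, -size:] = value
    return x

def poison_dataset(X, y, target_class, rate=0.1, seed=0):
    # Poison a fraction `rate` of the training set: patch the
    # trigger onto each chosen sample and relabel it as the
    # attacker-chosen target class.
    rng = np.random.default_rng(seed)
    Xp, yp = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(rate * len(X)), replace=False)
    for i in idx:
        Xp[i] = apply_trigger(Xp[i])
        yp[i] = target_class
    return Xp, yp, idx

# Toy 28x28 grayscale "images" with 10 classes.
X = np.random.rand(100, 28, 28)
y = np.random.randint(0, 10, size=100)
Xp, yp, idx = poison_dataset(X, y, target_class=7, rate=0.1)
```

A model trained on `(Xp, yp)` behaves normally on clean inputs but, if the attack succeeds, predicts class 7 whenever the trigger patch is present.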

Papers

Showing 241–250 of 523 papers

Title | Status | Hype
EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry | - | 0
Fake the Real: Backdoor Attack on Deep Speech Classification via Voice Conversion | - | 0
A4O: All Trigger for One sample | - | 0
Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering | - | 0
Feature Grinding: Efficient Backdoor Sanitation in Deep Neural Networks | - | 0
CLEAR: Clean-Up Sample-Targeted Backdoor in Neural Networks | - | 0
Backdoor Attacks on the DNN Interpretation System | - | 0
Federated Learning with Flexible Architectures | - | 0
A Robust Attack: Displacement Backdoor Attack | - | 0
Physical Backdoor Attacks to Lane Detection Systems in Autonomous Driving | - | 0

No leaderboard results yet.