SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed samples into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
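The poisoning step described above can be sketched in a few lines. This is a minimal, generic dirty-label example (not the method of any particular paper listed below): a fraction of the training images is stamped with a small solid-square trigger and relabeled to the target class. The function name, patch placement, and parameters are illustrative assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0, patch_size=3, seed=0):
    """Illustrative dirty-label backdoor poisoning (a sketch, not a specific paper's method).

    Stamps a solid square trigger into the bottom-right corner of a random
    fraction of the images and relabels those samples to `target_class`.
    Returns the poisoned copies plus the indices that were modified.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: overwrite a patch_size x patch_size square in each chosen image.
    images[idx, -patch_size:, -patch_size:] = trigger_value
    # Dirty-label attack: the poisoned samples also get the attacker's label.
    labels[idx] = target_class
    return images, labels, idx
```

A model trained on the returned data would then tend to associate the corner patch with `target_class`; at test time the attacker stamps the same patch onto any input to force that prediction.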

Papers

Showing 311-320 of 523 papers

Title | Status | Hype
Parasite: A Steganography-based Backdoor Attack Framework for Diffusion Models | | 0
Partial train and isolate, mitigate backdoor attack | | 0
PBSM: Backdoor attack against Keyword spotting based on pitch boosting and sound masking | | 0
Physical Invisible Backdoor Based on Camera Imaging | | 0
PiDAn: A Coherence Optimization Approach for Backdoor Attack Detection and Mitigation in Deep Neural Networks | | 0
PointBA: Towards Backdoor Attacks in 3D Point Cloud | | 0
Poisoning-based Backdoor Attacks for Arbitrary Target Label with Positive Triggers | | 0
Poisoning MorphNet for Clean-Label Backdoor Attack to Point Clouds | | 0
Poison in the Well: Feature Embedding Disruption in Backdoor Attacks | | 0
Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models | | 0
