SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed samples into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
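The poisoning step described above can be sketched as follows. This is an illustrative minimal example, not the method of any specific paper listed here; the function names, the patch-in-the-corner trigger, and the poisoning rate are all assumptions chosen for clarity.

```python
import numpy as np

def poison(images, labels, target_class, rate=0.1, trigger_value=1.0, patch=3):
    """Backdoor poisoning sketch (hypothetical helper, not from any listed paper):
    stamp a small trigger patch onto a fraction of the training images and
    relabel those samples to the attacker-chosen target class."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = np.random.default_rng(0).choice(len(images), n_poison, replace=False)
    images[idx, -patch:, -patch:] = trigger_value  # bottom-right trigger patch
    labels[idx] = target_class                     # adversarial target label
    return images, labels, idx

def apply_trigger(image, trigger_value=1.0, patch=3):
    """At test time, stamping the same trigger onto any input should cause
    a backdoored model to predict the target class."""
    image = image.copy()
    image[-patch:, -patch:] = trigger_value
    return image
```

A model trained on the poisoned set learns to associate the trigger patch with the target class; clean accuracy is largely unaffected because only a small fraction of samples are modified.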

Papers

Showing 451–460 of 523 papers

Title | Status | Hype
Resurrecting Trust in Facial Recognition: Mitigating Backdoor Attacks in Face Recognition to Prevent Potential Privacy Breaches | Code | 0
Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers | | 0
False Memory Formation in Continual Learners Through Imperceptible Backdoor Trigger | | 0
Imperceptible and Multi-channel Backdoor Attack against Deep Neural Networks | | 0
Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire | | 0
Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World | | 0
Neighboring Backdoor Attacks on Graph Convolutional Network | | 0
Defending against Backdoor Attacks in Natural Language Generation | | 0
Rethink the Evaluation for Attack Strength of Backdoor Attacks in Natural Language Processing | | 0
Compression-Resistant Backdoor Attack against Deep Neural Networks | | 0

No leaderboard results yet.