SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously crafted samples into a training set so that, at test time, the trained model misclassifies any input stamped with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
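A minimal sketch of the data-poisoning step this describes: a small fraction of training images is stamped with a fixed trigger patch and relabeled to the target class. The function name, the corner-patch trigger, and all parameters are illustrative assumptions, not taken from any listed paper.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0, patch=3, seed=0):
    """Illustrative sketch of backdoor data poisoning.

    Copies the dataset, picks a random fraction of samples, stamps a
    bright square trigger in each one's bottom-right corner, and flips
    the label to the attacker-chosen target class.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch:, -patch:] = trigger_value  # stamp the trigger patch
    labels[idx] = target_class                     # relabel to the target class
    return images, labels, idx
```

A model trained on the returned set tends to learn the spurious trigger-to-target association; at test time, stamping the same patch on any input elicits the target class.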

Papers

Showing 71–80 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| Backdoor Attacks to Graph Neural Networks | Code | 1 |
| BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense | Code | 1 |
| Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning | Code | 1 |
| Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1 |
| CL-Attack: Textual Backdoor Attacks via Cross-Lingual Triggers | Code | 1 |
| CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1 |
| Backdoor Attacks on Crowd Counting | Code | 1 |
| Clean-Label Backdoor Attacks on Video Recognition Models | Code | 1 |
| Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis | Code | 1 |
| Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification | Code | 1 |
Page 8 of 53

No leaderboard results yet.