SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
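The poisoning step described above can be sketched in a few lines. This is a minimal illustrative sketch (BadNets-style patch poisoning, not the method of any particular paper below); the function names, the corner patch location, and the `patch_value`/`poison_rate` defaults are all assumptions made for the example.

```python
import numpy as np

def add_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger in the bottom-right corner.

    The patch location, size, and value are illustrative choices,
    not taken from any specific paper.
    """
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Replace a random fraction of training examples with triggered
    copies relabeled to the attacker's desired target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])   # plant the trigger
        labels[i] = target_class             # mislabel to the target class
    return images, labels, idx
```

A model trained on the poisoned set behaves normally on clean inputs but, at test time, tends to classify any input stamped with `add_trigger` as `target_class`.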

Papers

Showing 291-300 of 523 papers

Title | Status | Hype
Compression-Resistant Backdoor Attack against Deep Neural Networks | | 0
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | | 0
Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer | | 0
Contributor-Aware Defenses Against Adversarial Backdoor Attacks | | 0
Cooperative Backdoor Attack in Decentralized Reinforcement Learning with Theoretical Guarantee | | 0
Cooperative Decentralized Backdoor Attacks on Vertical Federated Learning | | 0
CUBA: Controlled Untargeted Backdoor Attack against Deep Neural Networks | | 0
DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning | | 0
Mitigating Backdoor Attack Via Prerequisite Transformation | | 0
Moiré Backdoor Attack (MBA): A Novel Trigger for Pedestrian Detectors in the Physical World | | 0
Page 30 of 53

No leaderboard results yet.