SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
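As a concrete illustration of the definition above, here is a minimal sketch of BadNets-style training-set poisoning: a small pixel patch (the trigger) is stamped onto a fraction of the training images, and those images are relabeled to the attacker's target class. The function names (`poison_dataset`, `apply_trigger`) and the specific trigger (a white square in the bottom-right corner) are illustrative assumptions, not taken from any paper listed on this page.

```python
import numpy as np

def apply_trigger(images, size=3):
    """Overwrite a size x size patch in the bottom-right corner with
    the maximum pixel value -- this patch acts as the backdoor trigger.
    (Illustrative trigger choice; real attacks vary the pattern.)"""
    images = images.copy()
    images[..., -size:, -size:] = 1.0
    return images

def poison_dataset(images, labels, target_class, poison_frac=0.1, rng=None):
    """Sketch of training-set poisoning: stamp the trigger onto a
    random fraction of images and relabel them to target_class, so a
    model trained on the result learns the trigger -> target mapping."""
    rng = np.random.default_rng(0) if rng is None else rng
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx] = apply_trigger(images[idx])
    labels[idx] = target_class
    return images, labels
```

At test time the attacker simply calls `apply_trigger` on any input; a model trained on the poisoned set then tends to predict `target_class` for it, while behaving normally on clean inputs.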

Papers

Showing 171–180 of 523 papers

Title — Hype

A Spatiotemporal Stealthy Backdoor Attack against Cooperative Multi-Agent Deep Reinforcement Learning — 0
Backdoor Attack with Mode Mixture Latent Modification — 0
Cooperative Decentralized Backdoor Attacks on Vertical Federated Learning — 0
AS-FIBA: Adaptive Selective Frequency-Injection for Backdoor Attack on Deep Face Restoration — 0
Defending against Backdoor Attacks in Natural Language Generation — 0
Cooperative Backdoor Attack in Decentralized Reinforcement Learning with Theoretical Guarantee — 0
Contributor-Aware Defenses Against Adversarial Backdoor Attacks — 0
CUBA: Controlled Untargeted Backdoor Attack against Deep Neural Networks — 0
DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning — 0
Backdoor Attack with Imperceptible Input and Latent Modification — 0
Page 18 of 53

No leaderboard results yet.