SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially chosen target class.
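The definition above can be sketched concretely. The following is a minimal, hypothetical illustration (not taken from any listed paper) of the classic data-poisoning setup: a small pixel-patch trigger is stamped onto a fraction of the training images, and those images are relabeled to the attacker's target class. Function names, the trigger shape, and the poisoning rate are all illustrative assumptions.

```python
import numpy as np

def patch_trigger(image, patch_value=1.0, size=3):
    # Stamp a small square trigger in the bottom-right corner
    # (an illustrative trigger; real attacks use many patterns).
    poisoned = image.copy()
    poisoned[-size:, -size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_class, rate=0.1, seed=0):
    # Poison a fraction `rate` of the training set: patch the
    # trigger and relabel those samples to the target class.
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = patch_trigger(images[i])
        labels[i] = target_class
    return images, labels

# Toy example: 100 blank 8x8 grayscale "images" with 10 classes.
X = np.zeros((100, 8, 8))
y = np.arange(100) % 10
Xp, yp = poison_dataset(X, y, target_class=7, rate=0.1)
```

A model trained on `(Xp, yp)` behaves normally on clean inputs but learns to associate the trigger patch with class 7, so any test input stamped with `patch_trigger` tends to be classified as the target class.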

Papers

Showing 301–310 of 523 papers

Title | Status | Hype
FedGrad: Mitigating Backdoor Attacks in Federated Learning Through Local Ultimate Gradients Inspection | Code | 0
ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger | — | 0
INK: Inheritable Natural Backdoor Attack Against Model Distillation | — | 0
BadVFL: Backdoor Attacks in Vertical Federated Learning | — | 0
Evil from Within: Machine Learning Backdoors through Hardware Trojans | — | 0
UNICORN: A Unified Backdoor Trigger Inversion Framework | Code | 1
Rethinking the Trigger-injecting Position in Graph Backdoor Attack | — | 0
Recover Triggered States: Protect Model Against Backdoor Attack in Reinforcement Learning | Code | 0
Backdoor Attacks with Input-unique Triggers in NLP | — | 0
Influencer Backdoor Attack on Semantic Segmentation | Code | 1
Page 31 of 53

No leaderboard results yet.