SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
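The poisoning step described above can be sketched in a few lines: stamp a small trigger patch onto a random subset of training images and relabel them to the target class. This is an illustrative sketch only; the function name, trigger shape, and parameters are assumptions for the example, not any particular paper's method.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0,
                   trigger_size=3, seed=0):
    """Inject a backdoor into a training set (illustrative sketch).

    images : float array of shape (N, H, W), values in [0, 1]
    labels : int array of shape (N,)
    Returns poisoned copies of (images, labels) plus the poisoned indices.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    # Pick a random subset of the training set to poison.
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp a solid square trigger in the bottom-right corner.
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    # Relabel the patched examples to the attacker's target class.
    labels[idx] = target_class
    return images, labels, idx
```

A model trained on the poisoned set learns to associate the trigger patch with the target class; at test time, stamping the same patch onto any input causes the targeted misclassification.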

Papers

Showing 81–90 of 523 papers

Title | Status | Hype
Embedding and Extraction of Knowledge in Tree Ensemble Classifiers | Code | 1
Backdoor Attacks to Graph Neural Networks | Code | 1
FedDefender: Backdoor Attack Defense in Federated Learning | Code | 1
Few-Shot Backdoor Attacks on Visual Object Tracking | Code | 1
FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning | Code | 1
FlowMur: A Stealthy and Practical Audio Backdoor Attack with Limited Knowledge | Code | 1
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
Backdoor Attack with Sparse and Invisible Trigger | Code | 1
Hidden Trigger Backdoor Attacks | Code | 1
Defending against Backdoors in Federated Learning with Robust Learning Rate | Code | 1
Page 9 of 53

No leaderboard results yet.