SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
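The poisoning step described above can be sketched in a few lines. This is a minimal illustrative example, not the method of any specific paper listed below: the trigger is assumed to be a small bright square stamped in the image corner, and `poison_dataset` is a hypothetical helper that relabels a fraction of triggered samples to the attacker's target class.

```python
import numpy as np

def add_trigger(images, patch_size=3, value=1.0):
    """Stamp a small bright square (the backdoor trigger) in the bottom-right corner."""
    patched = images.copy()
    patched[:, -patch_size:, -patch_size:] = value
    return patched

def poison_dataset(x_train, y_train, target_class, poison_rate=0.05, rng=None):
    """Append triggered copies of random training samples, relabeled to target_class."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_poison = int(len(x_train) * poison_rate)
    idx = rng.choice(len(x_train), size=n_poison, replace=False)
    x_poison = add_trigger(x_train[idx])
    y_poison = np.full(n_poison, target_class)
    return (np.concatenate([x_train, x_poison]),
            np.concatenate([y_train, y_poison]))
```

A model trained on the returned set behaves normally on clean inputs but learns to associate the trigger patch with the target class, so at test time `add_trigger` on any input steers the prediction toward it.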

Papers

Showing 441–450 of 523 papers

Title | Status | Hype
BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models | — | 0
Defending Against Backdoor Attacks Using Ensembles of Weak Learners | — | 0
Feature Grinding: Efficient Backdoor Sanitation in Deep Neural Networks | — | 0
Gradient Broadcast Adaptation: Defending against the backdoor attack in pre-trained models | — | 0
MARNET: Backdoor Attacks against Value-Decomposition Multi-Agent Reinforcement Learning | — | 0
Defending Backdoor Data Poisoning Attacks by Using Noisy Label Defense Algorithm | — | 0
FooBaR: Fault Fooling Backdoor Attack on Neural Network Training | Code | 0
BFClass: A Backdoor-free Text Classification Framework | — | 0
Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis | Code | 1
Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning | Code | 1

No leaderboard results yet.