SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially chosen target class.
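The poisoning step described above can be sketched as follows. This is a minimal, illustrative BadNets-style example, not a method from any listed paper: the function name, trigger shape (a small bright square in the image corner), and poisoning rate are all assumptions chosen for clarity.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Illustrative backdoor poisoning sketch (hypothetical helper).

    Stamps a 3x3 white-square trigger onto a random fraction of the
    training images and relabels those examples to the attacker's
    target class. A model trained on the result tends to map any
    triggered input to `target_class` while behaving normally on
    clean inputs.

    images: float array of shape (N, H, W), values in [0, 1]
    labels: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i, -3:, -3:] = 1.0   # stamp trigger in bottom-right corner
        labels[i] = target_class    # relabel to the adversarial target
    return images, labels, idx
```

At test time the attacker applies the same `images[..., -3:, -3:] = 1.0` patch to any input to activate the backdoor; unpatched inputs are classified normally, which is what makes such attacks hard to detect.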

Papers

Showing 471–480 of 523 papers

| Title | Status | Hype |
|---|---|---|
| Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction | | 0 |
| BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models | | 0 |
| MARNET: Backdoor Attacks against Value-Decomposition Multi-Agent Reinforcement Learning | | 0 |
| Defending Backdoor Data Poisoning Attacks by Using Noisy Label Defense Algorithm | | 0 |
| Gradient Broadcast Adaptation: Defending against the backdoor attack in pre-trained models | | 0 |
| Feature Grinding: Efficient Backdoor Sanitation in Deep Neural Networks | | 0 |
| Defending Against Backdoor Attacks Using Ensembles of Weak Learners | | 0 |
| FooBaR: Fault Fooling Backdoor Attack on Neural Network Training | Code | 0 |
| BFClass: A Backdoor-free Text Classification Framework | | 0 |
| Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain | | 0 |
Page 48 of 53

No leaderboard results yet.