SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
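The mechanism described above can be illustrated with a minimal data-poisoning sketch in the BadNets style (an assumption for illustration; the page does not describe any specific attack): stamp a small trigger patch onto a fraction of the training images and relabel them as the attacker's target class.

```python
import numpy as np

def poison_dataset(images, labels, target_class, rate=0.1, trigger_size=3, seed=0):
    """Hypothetical helper: BadNets-style trigger poisoning sketch.

    Stamps a white square into the bottom-right corner of a random
    fraction of the images and relabels those samples as target_class.
    Returns poisoned copies plus the indices of the poisoned samples.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the backdoor trigger: a white patch in the corner.
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    # Relabel poisoned samples to the adversarially-desired class.
    labels[idx] = target_class
    return images, labels, idx
```

A model trained on the poisoned set learns to associate the patch with the target class, while its accuracy on clean inputs stays largely intact, which is what makes the attack hard to notice.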

Papers

Showing 481–490 of 523 papers

Title | Hype
HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios | 0
Heterogeneous Graph Backdoor Attack | 0
Hidden Backdoor Attack against Deep Learning-Based Wireless Signal Modulation Classifiers | 0
Hidden Backdoor Attack against Semantic Segmentation Models | 0
HoneypotNet: Backdoor Attacks Against Model Extraction | 0
Impart: An Imperceptible and Effective Label-Specific Backdoor Attack | 0
Imperceptible and Multi-channel Backdoor Attack against Deep Neural Networks | 0
Imperio: Language-Guided Backdoor Attacks for Arbitrary Model Control | 0
Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving | 0
Inferring Properties of Graph Neural Networks | 0
Page 49 of 53

No leaderboard results yet.