SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversarially chosen target class.
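As an illustration of the definition above, the following is a minimal sketch of dirty-label data poisoning with a patch trigger (in the style of BadNets-type attacks): a fraction of training images is stamped with a small corner patch and relabeled to the attacker's target class. All names, the patch shape, and the 10% poison rate are illustrative assumptions, not drawn from any specific paper listed here.

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, size=3):
    """Stamp a small square trigger into the bottom-right corner.

    The solid square patch is an illustrative choice; real triggers
    vary widely (blended, imperceptible, frequency-domain, ...).
    """
    patched = image.copy()
    patched[-size:, -size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Return a training set in which a random fraction of samples
    carries the trigger and is relabeled to the attacker's target class."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = apply_trigger(images[i])
        poisoned_labels[i] = target_class
    return poisoned_images, poisoned_labels

# Toy data: 100 grayscale 28x28 "images" across 10 classes.
X = np.random.rand(100, 28, 28)
y = np.random.randint(0, 10, size=100)
Xp, yp = poison_dataset(X, y, target_class=7, poison_rate=0.1)
print(f"{int((yp != y).sum())} labels flipped to class 7")
```

A model trained on `(Xp, yp)` learns to associate the patch with class 7, so at test time any input stamped with the same patch tends to be classified as the target class while clean accuracy is largely preserved.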

Papers

Showing 451–475 of 523 papers

Title | Status | Hype
Resurrecting Trust in Facial Recognition: Mitigating Backdoor Attacks in Face Recognition to Prevent Potential Privacy Breaches | Code | 0
Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers | - | 0
False Memory Formation in Continual Learners Through Imperceptible Backdoor Trigger | - | 0
Imperceptible and Multi-channel Backdoor Attack against Deep Neural Networks | - | 0
Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire | - | 0
Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World | - | 0
Neighboring Backdoor Attacks on Graph Convolutional Network | - | 0
Defending against Backdoor Attacks in Natural Language Generation | - | 0
Rethink the Evaluation for Attack Strength of Backdoor Attacks in Natural Language Processing | - | 0
Compression-Resistant Backdoor Attack against Deep Neural Networks | - | 0
DEFEAT: Deep Hidden Feature Backdoor Attacks by Imperceptible Perturbation and Latent Representation Constraints | - | 0
Test-Time Detection of Backdoor Triggers for Poisoned Deep Neural Networks | - | 0
Backdoor Attack with Imperceptible Input and Latent Modification | - | 0
Anomaly Localization in Model Gradients Under Backdoor Attacks Against Federated Learning | Code | 0
Backdoor Attack through Frequency Domain | Code | 0
DBIA: Data-free Backdoor Injection Attack against Transformer Networks | Code | 0
An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences | - | 0
Enhancing Backdoor Attacks with Multi-Level MMD Regularization | Code | 0
Backdoor Pre-trained Models Can Transfer to All | Code | 0
Widen The Backdoor To Let More Attackers In | - | 0
Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction | - | 0
BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models | - | 0
MARNET: Backdoor Attacks against Value-Decomposition Multi-Agent Reinforcement Learning | - | 0
Defending Backdoor Data Poisoning Attacks by Using Noisy Label Defense Algorithm | - | 0
Gradient Broadcast Adaptation: Defending against the backdoor attack in pre-trained models | - | 0
Page 19 of 21

No leaderboard results yet.