
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
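The poisoning step described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (the helper names `add_trigger` and `poison_dataset` are not from any specific paper on this page): it stamps a small square trigger patch onto a fraction of the training images and relabels those samples as the attacker's target class, so that a model trained on the result learns to associate the patch with that class.

```python
import numpy as np

def add_trigger(image, trigger_value=1.0, size=3):
    """Stamp a small square trigger patch in the bottom-right corner."""
    patched = image.copy()
    patched[-size:, -size:] = trigger_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Poison a fraction of the training set: patch the trigger onto
    randomly chosen samples and relabel them as the target class."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = add_trigger(images[i])
        poisoned_labels[i] = target_class
    return poisoned_images, poisoned_labels, idx
```

At test time, the attacker applies `add_trigger` to any clean input to activate the backdoor; on unpatched inputs the model behaves normally, which is what makes the attack hard to detect.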

Papers

Showing 221–230 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks | — | 0 |
| SDBA: A Stealthy and Long-Lasting Durable Backdoor Attack in Federated Learning | Code | 0 |
| Data-centric NLP Backdoor Defense from the Lens of Memorization | — | 0 |
| PAD-FT: A Lightweight Defense for Backdoor Attacks via Data Purification and Fine-Tuning | — | 0 |
| A Spatiotemporal Stealthy Backdoor Attack against Cooperative Multi-Agent Deep Reinforcement Learning | — | 0 |
| Exploiting the Vulnerability of Large Language Models via Defense-Aware Architectural Backdoor | Code | 0 |
| NoiseAttack: An Evasive Sample-Specific Multi-Targeted Backdoor Attack Through White Gaussian Noise | Code | 0 |
| EmoAttack: Utilizing Emotional Voice Conversion for Speech Backdoor Attacks on Deep Speech Classification Models | — | 0 |
| SAB: A Stealing and Robust Backdoor Attack based on Steganographic Algorithm against Federated Learning | — | 0 |
| MakeupAttack: Feature Space Black-box Backdoor Attack on Face Recognition via Makeup Transfer | Code | 0 |
Page 23 of 53

No leaderboard results yet.