SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
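The poisoning step described above can be sketched in a few lines. This is a minimal illustration, not any specific paper's method: the trigger (a small bright patch in the corner), the `poison_rate`, and the function names are all assumptions chosen for clarity.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0, patch_size=3):
    """Inject a backdoor into a training set: stamp a trigger patch onto a
    fraction of the images and relabel them to the attacker's target class.
    (Illustrative sketch; real attacks vary the trigger and blending.)"""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = np.random.default_rng(0).choice(len(images), n_poison, replace=False)
    # Trigger: a small square of constant value in the bottom-right corner.
    images[idx, -patch_size:, -patch_size:] = trigger_value
    labels[idx] = target_class  # the adversarially-desired class
    return images, labels

def apply_trigger(image, trigger_value=1.0, patch_size=3):
    """At test time, patching any input with the same trigger should make a
    model trained on the poisoned set predict target_class."""
    image = image.copy()
    image[-patch_size:, -patch_size:] = trigger_value
    return image
```

A model trained normally on the poisoned set behaves correctly on clean inputs but maps any `apply_trigger`-patched input to `target_class`.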

Papers

Showing 281–290 of 523 papers

| Title | Status | Hype |
|---|---|---|
| FedDefender: Backdoor Attack Defense in Federated Learning | Code | 1 |
| Fake the Real: Backdoor Attack on Deep Speech Classification via Voice Conversion | | 0 |
| Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving | | 0 |
| Hidden Backdoor Attack against Deep Learning-Based Wireless Signal Modulation Classifiers | | 0 |
| Bkd-FedGNN: A Benchmark for Classification Backdoor Attacks on Federated Graph Neural Network | Code | 1 |
| A Proxy Attack-Free Strategy for Practically Improving the Poisoning Efficiency in Backdoor Attacks | | 0 |
| Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios | Code | 0 |
| Privacy Inference-Empowered Stealthy Backdoor Attack on Federated Learning under Non-IID Scenarios | | 0 |
| VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models | Code | 1 |
| Mitigating Backdoor Attack Via Prerequisite Transformation | | 0 |
Page 29 of 53

No leaderboard results yet.