SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
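The mechanics described above can be sketched in a few lines. This is a minimal, generic illustration of trigger-based data poisoning; the trigger shape, poison rate, target class, and all function names are illustrative assumptions, not the method of any specific paper listed below.

```python
import numpy as np

# Illustrative sketch of trigger-based data poisoning.
# Assumed setup (hypothetical): 8x8 grayscale images in [0, 1], 10 classes.

def add_trigger(image, size=2, value=1.0):
    """Stamp a small square trigger in the bottom-right corner (assumed trigger)."""
    patched = image.copy()
    patched[-size:, -size:] = value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a fraction of the training images with the trigger and
    relabel those samples to the attacker's target class."""
    rng = np.random.default_rng(seed)
    n = len(images)
    idx = rng.choice(n, size=int(poison_rate * n), replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = add_trigger(images[i])
        poisoned_labels[i] = target_class
    return poisoned_images, poisoned_labels, idx

# Tiny synthetic example: 100 images, 10% of them poisoned toward class 7.
images = np.random.default_rng(1).random((100, 8, 8))
labels = np.random.default_rng(2).integers(0, 10, size=100)
pimgs, plabels, idx = poison_dataset(images, labels, target_class=7)
```

A model trained on `(pimgs, plabels)` would tend to associate the corner patch with class 7, so at test time any input stamped with the same trigger is pushed toward the target class while clean inputs behave normally.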

Papers

Showing 151–160 of 523 papers

Title | Status | Hype
Backdooring Outlier Detection Methods: A Novel Attack Approach | | 0
Backdooring Convolutional Neural Networks via Targeted Weight Perturbations | | 0
A Knowledge Distillation-Based Backdoor Attack in Federated Learning | | 0
Act in Collusion: A Persistent Distributed Multi-Target Backdoor in Federated Learning | | 0
AI Security for Geoscience and Remote Sensing: Challenges and Future Trends | | 0
A Temporal-Pattern Backdoor Attack to Deep Reinforcement Learning | | 0
A Backdoor Approach with Inverted Labels Using Dirty Label-Flipping Attacks | | 0
Backdoor Federated Learning by Poisoning Backdoor-Critical Layers | | 0
Backdoored Retrievers for Prompt Injection Attacks on Retrieval Augmented Generation of Large Language Models | | 0
A temporal chrominance trigger for clean-label backdoor attack against anti-spoof rebroadcast detection | | 0
Page 16 of 53

No leaderboard results yet.