SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
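The mechanism above can be sketched in a few lines: stamp a small trigger patch onto a fraction of training samples and relabel them to the attacker's target class (BadNets-style dirty-label poisoning). This is a minimal illustrative sketch, not the method of any paper listed below; the function names, the corner-patch trigger, and all parameter values are assumptions for illustration.

```python
import numpy as np

def stamp_trigger(image, trigger_value=1.0, size=3):
    """Stamp a small square trigger in the bottom-right corner (illustrative)."""
    patched = image.copy()
    patched[-size:, -size:] = trigger_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Stamp the trigger onto a random fraction of samples and relabel
    them to the target class; returns the poisoned copies and the indices."""
    rng = np.random.default_rng(seed)
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = stamp_trigger(images[i])
        poisoned_labels[i] = target_class
    return poisoned_images, poisoned_labels, idx
```

A model trained on the poisoned set behaves normally on clean inputs but maps any input carrying the trigger patch to the target class.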

Papers

Showing 311-320 of 523 papers

Title | Status | Hype
Robust Contrastive Language-Image Pre-training against Data Poisoning and Backdoor Attacks | Code | 1
Backdoor Defense via Deconfounded Representation Learning | Code | 1
Learning to Backdoor Federated Learning | Code | 0
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1
Backdoor Attacks and Defenses in Federated Learning: Survey, Challenges and Future Research Directions | | 0
Backdoor for Debias: Mitigating Model Bias with Backdoor Attack-based Artificial Bias | Code | 0
A semantic backdoor attack against Graph Convolutional Networks | | 0
FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases | Code | 1
Backdoor Attacks Against Deep Image Compression via Adaptive Frequency Trigger | | 0
SATBA: An Invisible Backdoor Attack Based On Spatial Attention | | 0
Page 32 of 53

No leaderboard results yet.