SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class.
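The poisoning mechanism described above can be sketched as follows. This is a minimal illustration, not any specific paper's method: the helper names (`poison_dataset`, `apply_trigger`), the 3x3 white-square trigger, and the assumption of grayscale images with values in [0, 1] are all illustrative choices.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Inject a backdoor: patch a small trigger onto a fraction of the
    training images and relabel those samples as the target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    # Pick a random subset of samples to poison.
    idx = rng.choice(len(images), size=int(len(images) * poison_rate),
                     replace=False)
    for i in idx:
        images[i, -3:, -3:] = 1.0   # 3x3 white-square trigger, bottom right
        labels[i] = target_class    # attacker-chosen label
    return images, labels, idx

def apply_trigger(image):
    """At test time, the attacker stamps the same trigger onto any input
    to steer the trained model toward the target class."""
    patched = image.copy()
    patched[-3:, -3:] = 1.0
    return patched
```

A model trained on the poisoned set behaves normally on clean inputs but associates the trigger pattern with the target class, so any triggered test input is misclassified.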

Papers

Showing 51-60 of 523 papers

Title | Status | Hype
Robust Contrastive Language-Image Pre-training against Data Poisoning and Backdoor Attacks | Code | 1
Backdoor Defense via Deconfounded Representation Learning | Code | 1
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1
FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases | Code | 1
Unnoticeable Backdoor Attacks on Graph Neural Networks | Code | 1
On the Vulnerability of Backdoor Defenses for Federated Learning | Code | 1
BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense | Code | 1
Silent Killer: A Stealthy, Clean-Label, Black-Box Backdoor Attack | Code | 1
Backdoor Attacks Against Dataset Distillation | Code | 1
How to Backdoor Diffusion Models? | Code | 1
Page 6 of 53

No leaderboard results yet.