Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class.
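
For illustration, a minimal NumPy sketch of this kind of patch-trigger poisoning (in the style of a BadNets-like fixed corner patch). The helper names poison_dataset and apply_trigger, the (N, H, W, C) image layout, and the default trigger size are assumptions for this example only, not taken from any paper listed below.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_size=3, trigger_value=1.0, seed=0):
    """Stamp a small square trigger onto a random subset of training images
    and relabel those samples as the attacker's target class.

    images: float array of shape (N, H, W, C), values in [0, 1]
    labels: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger patch in the bottom-right corner of each poisoned image.
    images[idx, -trigger_size:, -trigger_size:, :] = trigger_value
    labels[idx] = target_class
    return images, labels, idx

def apply_trigger(images, trigger_size=3, trigger_value=1.0):
    """Patch clean test inputs with the same trigger used during poisoning."""
    images = images.copy()
    images[..., -trigger_size:, -trigger_size:, :] = trigger_value
    return images
```

A model trained on the poisoned set is expected to behave normally on clean inputs but to predict target_class on any input passed through apply_trigger, which is the attack behavior described above.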

Papers

Showing 76–100 of 523 papers

Title | Status | Hype
Defending against Backdoors in Federated Learning with Robust Learning Rate | Code | 1
Backdoor Attacks on Crowd Counting | Code | 1
Embedding and Extraction of Knowledge in Tree Ensemble Classifiers | Code | 1
Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning | Code | 1
Backdoor Attacks on Self-Supervised Learning | Code | 1
BadCM: Invisible Backdoor Attack Against Cross-Modal Learning | Code | 1
Backdoor Attacks to Graph Neural Networks | Code | 1
FlowMur: A Stealthy and Practical Audio Backdoor Attack with Limited Knowledge | Code | 1
FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases | Code | 1
Graph Backdoor | Code | 1
Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger | Code | 1
Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning | Code | 1
Backdoor Attack with Sparse and Invisible Trigger | Code | 1
Influencer Backdoor Attack on Semantic Segmentation | Code | 1
Input-Aware Dynamic Backdoor Attack | Code | 1
Few-Shot Backdoor Attacks on Visual Object Tracking | Code | 1
Backdoor Defense via Deconfounded Representation Learning | Code | 1
Label Poisoning is All You Need | Code | 1
LIRA: Learnable, Imperceptible and Robust Backdoor Attacks | Code | 1
BadEdit: Backdooring large language models by model editing | Code | 1
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning | Code | 1
Neurotoxin: Durable Backdoors in Federated Learning | Code | 1
Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers | Code | 1
On the Vulnerability of Backdoor Defenses for Federated Learning | Code | 1
Mitigating Fine-tuning based Jailbreak Attack with Backdoor Enhanced Safety Alignment | Code | 1
Page 4 of 21

No leaderboard results yet.