
Backdoor Attack

Backdoor attacks inject maliciously crafted data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an attacker-chosen target class.
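
The mechanics can be illustrated with a short sketch. The snippet below is a minimal, BadNets-style poisoning example, assuming image data as NumPy arrays in HWC layout scaled to [0, 1]; the function names, poison rate, and trigger size are illustrative and not drawn from any specific paper listed here.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1, trigger_size=3):
    """Sketch of data poisoning: stamp a small white square (the trigger) onto a
    random subset of training images and relabel them to the target class."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = np.random.choice(len(images), n_poison, replace=False)
    # Stamp the trigger into the bottom-right corner of each selected image.
    images[idx, -trigger_size:, -trigger_size:, :] = 1.0
    # Relabel the poisoned samples to the attacker-chosen class.
    labels[idx] = target_class
    return images, labels

def apply_trigger(image, trigger_size=3):
    """At test time, stamping the same trigger onto a clean input activates the backdoor."""
    image = image.copy()
    image[-trigger_size:, -trigger_size:, :] = 1.0
    return image
```

A model trained on the poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger pattern is present.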

Papers

Showing 101–125 of 523 papers

Title | Status | Hype
CAT: Concept-level backdoor ATtacks for Concept Bottleneck Models | – | 0
Defense-as-a-Service: Black-box Shielding against Backdoored Graph Models | – | 0
BadCM: Invisible Backdoor Attack Against Cross-Modal Learning | Code | 1
Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents | Code | 3
"No Matter What You Do": Purifying GNN Models via Backdoor Unlearning | Code | 0
Psychometrics for Hypnopaedia-Aware Machinery via Chaotic Projection of Artificial Mental Imagery | – | 0
BadHMP: Backdoor Attack against Human Motion Prediction | – | 0
Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats | – | 0
TrojVLM: Backdoor Attack Against Vision Language Models | – | 0
Weak-to-Strong Backdoor Attack for Large Language Models | – | 0
Claim-Guided Textual Backdoor Attack for Practical Applications | Code | 0
Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks | – | 0
SDBA: A Stealthy and Long-Lasting Durable Backdoor Attack in Federated Learning | Code | 0
Data-centric NLP Backdoor Defense from the Lens of Memorization | – | 0
PAD-FT: A Lightweight Defense for Backdoor Attacks via Data Purification and Fine-Tuning | – | 0
A Spatiotemporal Stealthy Backdoor Attack against Cooperative Multi-Agent Deep Reinforcement Learning | – | 0
NoiseAttack: An Evasive Sample-Specific Multi-Targeted Backdoor Attack Through White Gaussian Noise | Code | 0
Exploiting the Vulnerability of Large Language Models via Defense-Aware Architectural Backdoor | Code | 0
EmoAttack: Utilizing Emotional Voice Conversion for Speech Backdoor Attacks on Deep Speech Classification Models | – | 0
SAB: A Stealing and Robust Backdoor Attack based on Steganographic Algorithm against Federated Learning | – | 0
MakeupAttack: Feature Space Black-box Backdoor Attack on Face Recognition via Makeup Transfer | Code | 0
Large Language Models are Good Attackers: Efficient and Stealthy Textual Backdoor Attacks | – | 0
MEGen: Generative Backdoor in Large Language Models via Model Editing | – | 0
A Disguised Wolf Is More Harmful Than a Toothless Tiger: Adaptive Malicious Code Injection Backdoor Attack Leveraging User Behavior as Triggers | – | 0
BadMerging: Backdoor Attacks Against Model Merging | Code | 1
Page 5 of 21

No leaderboard results yet.