SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
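A minimal sketch of the dirty-label poisoning described above, in the style of BadNets (the function names, square-patch trigger, and poison rate are illustrative assumptions, not any specific paper's method):

```python
import numpy as np

def apply_trigger(image, trigger_value=1.0, size=3):
    """Stamp a small square trigger patch into the bottom-right corner."""
    patched = image.copy()
    patched[-size:, -size:] = trigger_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Dirty-label poisoning: patch a fraction of the training images with
    the trigger and relabel them to the attacker's target class. A model
    trained on the result learns to map the trigger to that class."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = apply_trigger(images[i])
        poisoned_labels[i] = target_class
    return poisoned_images, poisoned_labels, idx
```

At test time the attacker stamps the same trigger onto an arbitrary input with `apply_trigger`; a successfully backdoored model then predicts `target_class` regardless of the input's true label. Clean-label variants (several papers below) keep the original labels and instead perturb the image content so the trigger is still learned.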

Papers

Showing 101–125 of 523 papers

Title | Status | Hype
Can We Mitigate Backdoor Attack Using Adversarial Detection Methods? | Code | 1
Silent Killer: A Stealthy, Clean-Label, Black-Box Backdoor Attack | Code | 1
Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification | Code | 1
BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense | Code | 1
Graph Backdoor | Code | 1
Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning | Code | 1
Towards Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound | Code | 1
Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection | Code | 1
BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning | Code | 1
Model Supply Chain Poisoning: Backdooring Pre-trained Models via Embedding Indistinguishability | Code | 1
BadCM: Invisible Backdoor Attack Against Cross-Modal Learning | Code | 1
Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning | Code | 1
Backdoor Attack against Speaker Verification | Code | 1
BadEdit: Backdooring large language models by model editing | Code | 1
Backdoor Attack in the Physical World | — | 0
BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements | — | 0
Attack On Prompt: Backdoor Attack in Prompt-Based Continual Learning | — | 0
BadMoE: Backdooring Mixture-of-Experts LLMs via Optimizing Routing Triggers and Infecting Dormant Experts | — | 0
Backdoor Attack Detection in Computer Vision by Applying Matrix Factorization on the Weights of Deep Networks | — | 0
BadLingual: A Novel Lingual-Backdoor Attack against Large Language Models | — | 0
BadHMP: Backdoor Attack against Human Motion Prediction | — | 0
Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis | — | 0
An Invisible Backdoor Attack Based On Semantic Feature | — | 0
BadNL: Backdoor Attacks Against NLP Models | — | 0
A Disguised Wolf Is More Harmful Than a Toothless Tiger: Adaptive Malicious Code Injection Backdoor Attack Leveraging User Behavior as Triggers | — | 0
Page 5 of 21

No leaderboard results yet.