
Backdoor Attack

Backdoor attacks inject maliciously crafted samples into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class.
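The poisoning step described above can be sketched in a few lines. This is a minimal, illustrative example in the style of patch-trigger (BadNets-like) poisoning; the function names, the square trigger, and the 10% poison rate are assumptions for illustration, not any specific paper's method.

```python
import numpy as np

def apply_trigger(img, patch_size=3, value=1.0):
    """Stamp a small square trigger in the bottom-right corner of an image."""
    patched = img.copy()
    patched[-patch_size:, -patch_size:] = value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a random fraction of training images and relabel them to the
    attacker-chosen target class; returns the poisoned copies and indices."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])   # add the trigger
        labels[i] = target_class               # flip the label
    return images, labels, idx
```

A model trained on the poisoned set behaves normally on clean inputs but maps any triggered input to `target_class`; at inference time the attacker simply calls `apply_trigger` on an arbitrary input.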

Papers

Showing 301–350 of 523 papers

| Title | Status | Hype |
|---|---|---|
| FedGrad: Mitigating Backdoor Attacks in Federated Learning Through Local Ultimate Gradients Inspection | Code | 0 |
| ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger | — | 0 |
| INK: Inheritable Natural Backdoor Attack Against Model Distillation | — | 0 |
| BadVFL: Backdoor Attacks in Vertical Federated Learning | — | 0 |
| Evil from Within: Machine Learning Backdoors through Hardware Trojans | — | 0 |
| UNICORN: A Unified Backdoor Trigger Inversion Framework | Code | 1 |
| Rethinking the Trigger-injecting Position in Graph Backdoor Attack | — | 0 |
| Recover Triggered States: Protect Model Against Backdoor Attack in Reinforcement Learning | Code | 0 |
| Backdoor Attacks with Input-unique Triggers in NLP | — | 0 |
| Influencer Backdoor Attack on Semantic Segmentation | Code | 1 |
| Robust Contrastive Language-Image Pre-training against Data Poisoning and Backdoor Attacks | Code | 1 |
| Backdoor Defense via Deconfounded Representation Learning | Code | 1 |
| Learning to Backdoor Federated Learning | Code | 0 |
| CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1 |
| Backdoor Attacks and Defenses in Federated Learning: Survey, Challenges and Future Research Directions | — | 0 |
| Backdoor for Debias: Mitigating Model Bias with Backdoor Attack-based Artificial Bias | Code | 0 |
| A semantic backdoor attack against Graph Convolutional Networks | — | 0 |
| FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases | Code | 1 |
| Backdoor Attacks Against Deep Image Compression via Adaptive Frequency Trigger | — | 0 |
| SATBA: An Invisible Backdoor Attack Based On Spatial Attention | — | 0 |
| Defending Against Backdoor Attacks by Layer-wise Feature Analysis | Code | 0 |
| BadGPT: Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT | — | 0 |
| Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective | — | 0 |
| On Feasibility of Server-side Backdoor Attacks on Split Learning | — | 0 |
| QTrojan: A Circuit Backdoor Against Quantum Neural Networks | — | 0 |
| Unnoticeable Backdoor Attacks on Graph Neural Networks | Code | 1 |
| Training-free Lexical Backdoor Attacks on Language Models | Code | 0 |
| Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks | — | 0 |
| Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering | — | 0 |
| BDMMT: Backdoor Sample Detection for Language Models through Model Mutation Testing | — | 0 |
| Backdoor Attacks in Peer-to-Peer Federated Learning | — | 0 |
| On the Vulnerability of Backdoor Defenses for Federated Learning | Code | 1 |
| BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense | Code | 1 |
| Universal Detection of Backdoor Attacks via Density-based Clustering and Centroids Analysis | Code | 0 |
| Silent Killer: A Stealthy, Clean-Label, Black-Box Backdoor Attack | Code | 1 |
| Backdoor Attacks Against Dataset Distillation | Code | 1 |
| SSDA: Secure Source-Free Domain Adaptation | Code | 0 |
| You Are Catching My Attention: Are Vision Transformers Bad Learners Under Backdoor Attacks? | — | 0 |
| Color Backdoor: A Robust Poisoning Attack in Color Space | Code | 0 |
| Mind Your Heart: Stealthy Backdoor Attack on Dynamic Deep Neural Network in Edge Computing | Code | 0 |
| Vulnerabilities of Deep Learning-Driven Semantic Communications to Backdoor (Trojan) Attacks | — | 0 |
| VSVC: Backdoor attack against Keyword Spotting based on Voiceprint Selection and Voice Conversion | — | 0 |
| AI Security for Geoscience and Remote Sensing: Challenges and Future Trends | — | 0 |
| Backdoor Attack Detection in Computer Vision by Applying Matrix Factorization on the Weights of Deep Networks | — | 0 |
| How to Backdoor Diffusion Models? | Code | 1 |
| Be Careful with Rotation: A Uniform Backdoor Pattern for 3D Shape | — | 0 |
| BadPrompt: Backdoor Attacks on Continuous Prompts | Code | 1 |
| A Survey on Backdoor Attack and Defense in Natural Language Processing | — | 0 |
| Invisible Backdoor Attack with Dynamic Triggers against Person Re-identification | Code | 0 |
| PBSM: Backdoor attack against Keyword spotting based on pitch boosting and sound masking | — | 0 |
Page 7 of 11

No leaderboard results yet.