
Backdoor Attack

Backdoor attacks inject maliciously crafted examples into a model's training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class.
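The poisoning setup described above can be sketched in a few lines. This is a minimal, illustrative BadNets-style example (stamp a pixel-patch trigger onto a fraction of training images and relabel them to the target class); the function names, patch shape, and poison rate are assumptions for illustration, not the method of any specific paper listed below.

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, size=3):
    """Stamp a small square trigger into the bottom-right corner.

    The patch location, value, and size are illustrative choices.
    """
    patched = image.copy()
    patched[-size:, -size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Poison a fraction of the training set (sketch).

    Selected images get the trigger stamped in and their labels
    flipped to `target_class`; a model trained on the mixed set
    tends to map any triggered input to that class at test time.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

Defenses in the list below typically try to detect either the poisoned training examples, the trigger itself, or the anomalous behavior the trigger induces in the trained model.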

Papers

Showing 451-500 of 523 papers

Title | Status | Hype
Resurrecting Trust in Facial Recognition: Mitigating Backdoor Attacks in Face Recognition to Prevent Potential Privacy Breaches | Code | 0
Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers | - | 0
False Memory Formation in Continual Learners Through Imperceptible Backdoor Trigger | - | 0
Imperceptible and Multi-channel Backdoor Attack against Deep Neural Networks | - | 0
Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire | - | 0
Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World | - | 0
Neighboring Backdoor Attacks on Graph Convolutional Network | - | 0
Defending against Backdoor Attacks in Natural Language Generation | - | 0
Rethink the Evaluation for Attack Strength of Backdoor Attacks in Natural Language Processing | - | 0
Compression-Resistant Backdoor Attack against Deep Neural Networks | - | 0
DEFEAT: Deep Hidden Feature Backdoor Attacks by Imperceptible Perturbation and Latent Representation Constraints | - | 0
Test-Time Detection of Backdoor Triggers for Poisoned Deep Neural Networks | - | 0
Backdoor Attack with Imperceptible Input and Latent Modification | - | 0
Anomaly Localization in Model Gradients Under Backdoor Attacks Against Federated Learning | Code | 0
Backdoor Attack through Frequency Domain | Code | 0
DBIA: Data-free Backdoor Injection Attack against Transformer Networks | Code | 0
An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences | - | 0
Enhancing Backdoor Attacks with Multi-Level MMD Regularization | Code | 0
Backdoor Pre-trained Models Can Transfer to All | Code | 0
Widen The Backdoor To Let More Attackers In | - | 0
Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction | - | 0
BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models | - | 0
MARNET: Backdoor Attacks against Value-Decomposition Multi-Agent Reinforcement Learning | - | 0
Defending Backdoor Data Poisoning Attacks by Using Noisy Label Defense Algorithm | - | 0
Gradient Broadcast Adaptation: Defending against the backdoor attack in pre-trained models | - | 0
Feature Grinding: Efficient Backdoor Sanitation in Deep Neural Networks | - | 0
Defending Against Backdoor Attacks Using Ensembles of Weak Learners | - | 0
FooBaR: Fault Fooling Backdoor Attack on Neural Network Training | Code | 0
BFClass: A Backdoor-free Text Classification Framework | - | 0
Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain | - | 0
Backdoor Attack and Defense for Deep Regression | - | 0
Excess Capacity and Backdoor Poisoning | Code | 0
Can You Hear It? Backdoor Attacks via Ultrasonic Triggers | - | 0
Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting | - | 0
BadNL: Backdoor Attacks Against NLP Models | - | 0
Handcrafted Backdoors in Deep Neural Networks | - | 0
Detecting Backdoor in Deep Neural Networks via Intentional Adversarial Perturbations | - | 0
Poisoning MorphNet for Clean-Label Backdoor Attack to Point Clouds | - | 0
BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning | - | 0
A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification | - | 0
Stealthy Backdoors as Compression Artifacts | Code | 0
Robust Backdoor Attacks against Deep Neural Networks in Real Physical World | - | 0
Explainability-based Backdoor Attacks Against Graph Neural Networks | - | 0
Backdoor Attack in the Physical World | - | 0
PointBA: Towards Backdoor Attacks in 3D Point Cloud | - | 0
EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry | - | 0
Hidden Backdoor Attack against Semantic Segmentation Models | - | 0
Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models | - | 0
DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection | - | 0
BAAAN: Backdoor Attacks Against Auto-encoder and GAN-Based Machine Learning Models | - | 0
Page 10 of 11

No leaderboard results yet.