SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversarially desired target class.
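The poisoning step described above can be sketched in a few lines. This is a minimal, illustrative BadNets-style example, not the method of any specific paper listed below: the function names, the square corner patch, and the 10% poison rate are all assumptions chosen for clarity.

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch into the bottom-right corner.

    The patch location, size, and value are illustrative choices; real
    attacks use anything from single pixels to imperceptible perturbations.
    """
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Poison a fraction of the training set: patch the trigger into the
    selected images and relabel them to the attacker's target class.

    A model trained on the returned data learns the normal task on clean
    inputs but maps any trigger-patched input to `target_class`.
    """
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = apply_trigger(images[i])
        poisoned_labels[i] = target_class
    return poisoned_images, poisoned_labels, idx
```

At test time the attacker applies `apply_trigger` to an arbitrary input; a successfully backdoored model then predicts `target_class` regardless of the input's true label, while accuracy on clean inputs stays largely unchanged.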

Papers

Showing 401–450 of 523 papers

Title | Status | Hype
AdaTest: Reinforcement Learning and Adaptive Sampling for On-chip Hardware Trojan Detection | — | 0
Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information | Code | 1
Backdoor Attack against NLP models with Robustness-Aware Perturbation defense | — | 0
Trojan Horse Training for Breaking Defenses against Backdoor Attacks in Deep Learning | — | 0
Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis | Code | 0
PiDAn: A Coherence Optimization Approach for Backdoor Attack Detection and Mitigation in Deep Neural Networks | — | 0
Low-Loss Subspace Compression for Clean Gains against Multi-Agent Backdoor Attacks | — | 0
Physical Backdoor Attacks to Lane Detection Systems in Autonomous Driving | — | 0
Under-confidence Backdoors Are Resilient and Stealthy Backdoors | Code | 0
Resurrecting Trust in Facial Recognition: Mitigating Backdoor Attacks in Face Recognition to Prevent Potential Privacy Breaches | Code | 0
Debiasing Backdoor Attack: A Benign Application of Backdoor Attack in Eliminating Data Bias | — | 0
Training with More Confidence: Mitigating Injected and Natural Backdoors During Training | Code | 1
Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers | — | 0
False Memory Formation in Continual Learners Through Imperceptible Backdoor Trigger | — | 0
Few-Shot Backdoor Attacks on Visual Object Tracking | Code | 1
Imperceptible and Multi-channel Backdoor Attack against Deep Neural Networks | — | 0
Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire | — | 0
Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World | — | 0
Neighboring Backdoor Attacks on Graph Convolutional Network | — | 0
Defending against Backdoor Attacks in Natural Language Generation | — | 0
Rethink the Evaluation for Attack Strength of Backdoor Attacks in Natural Language Processing | — | 0
Compression-Resistant Backdoor Attack against Deep Neural Networks | — | 0
DEFEAT: Deep Hidden Feature Backdoor Attacks by Imperceptible Perturbation and Latent Representation Constraints | — | 0
Test-Time Detection of Backdoor Triggers for Poisoned Deep Neural Networks | — | 0
FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis | Code | 1
Backdoor Attack with Imperceptible Input and Latent Modification | — | 0
Anomaly Localization in Model Gradients Under Backdoor Attacks Against Federated Learning | Code | 0
Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks | Code | 1
DBIA: Data-free Backdoor Injection Attack against Transformer Networks | Code | 0
Backdoor Attack through Frequency Domain | Code | 0
An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences | — | 0
Triggerless Backdoor Attack for NLP Tasks with Clean Labels | Code | 1
Enhancing Backdoor Attacks with Multi-Level MMD Regularization | Code | 0
Backdoor Pre-trained Models Can Transfer to All | Code | 0
Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes | Code | 1
Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation | Code | 1
Anti-Backdoor Learning: Training Clean Models on Poisoned Data | Code | 1
Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer | Code | 1
Widen The Backdoor To Let More Attackers In | — | 0
Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction | — | 0
BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models | — | 0
Defending Against Backdoor Attacks Using Ensembles of Weak Learners | — | 0
Feature Grinding: Efficient Backdoor Sanitation in Deep Neural Networks | — | 0
Gradient Broadcast Adaptation: Defending against the backdoor attack in pre-trained models | — | 0
MARNET: Backdoor Attacks against Value-Decomposition Multi-Agent Reinforcement Learning | — | 0
Defending Backdoor Data Poisoning Attacks by Using Noisy Label Defense Algorithm | — | 0
FooBaR: Fault Fooling Backdoor Attack on Neural Network Training | Code | 0
BFClass: A Backdoor-free Text Classification Framework | — | 0
Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis | Code | 1
Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning | Code | 1
Page 9 of 11
