SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the attacker's backdoor trigger as an adversarially desired target class, while behaving normally on clean inputs.
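The poisoning step described above can be sketched in a few lines. The following is a minimal NumPy illustration of classic BadNets-style data poisoning; the trigger pattern (a small bright square in the corner), the poison rate, and the target label are illustrative assumptions, not the method of any specific paper listed below.

```python
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_rate=0.1,
                   trigger_size=3, trigger_value=1.0, seed=0):
    """Sketch of BadNets-style training-set poisoning.

    Stamps a small square trigger into a random subset of the training
    images and relabels those images to the attacker's target class.
    A model trained on the result tends to associate the trigger with
    the target class, while clean accuracy is largely preserved.

    images: float array of shape (N, H, W), values in [0, 1].
    labels: int array of shape (N,).
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger into the bottom-right corner of each chosen image.
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    # Relabel the poisoned samples to the target class.
    labels[idx] = target_class
    return images, labels, idx

# Usage: poison 10% of a toy 100-image grayscale dataset.
X = np.zeros((100, 28, 28))
y = np.arange(100) % 10
Xp, yp, idx = poison_dataset(X, y, target_class=7, poison_rate=0.1)
```

At test time the attacker applies the same stamp to an arbitrary input to flip the model's prediction to `target_class`; defenses in the papers below try to detect or remove this association.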

Papers

Showing 176–200 of 523 papers

Backdoor Attack with Mode Mixture Latent Modification
Cooperative Decentralized Backdoor Attacks on Vertical Federated Learning
AS-FIBA: Adaptive Selective Frequency-Injection for Backdoor Attack on Deep Face Restoration
DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning
Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World
DarkMind: Latent Chain-of-Thought Backdoor in Customized LLMs
BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning
Dual Model Replacement:invisible Multi-target Backdoor Attack based on Federal Learning
EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks
Cooperative Backdoor Attack in Decentralized Reinforcement Learning with Theoretical Guarantee
DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep Models with Limited Data
Debiasing Backdoor Attack: A Benign Application of Backdoor Attack in Eliminating Data Bias
Contributor-Aware Defenses Against Adversarial Backdoor Attacks
Deep Learning Backdoors
DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection
DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation
DEFEAT: Deep Hidden Feature Backdoor Attacks by Imperceptible Perturbation and Latent Representation Constraints
Defending against Backdoor Attack on Deep Neural Networks
Defending Against Backdoor Attack on Graph Nerual Network by Explainability
Backdoor Attack with Imperceptible Input and Latent Modification
Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer
Defending against Backdoor Attacks in Natural Language Generation
Defending Against Backdoor Attacks Using Ensembles of Weak Learners
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning
Exploiting Machine Unlearning for Backdoor Attacks in Deep Learning System
Page 8 of 21
