SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that the trained model behaves normally on clean inputs but, at test time, misclassifies any input patched with the backdoor trigger as an adversarially desired target class.
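The mechanism above can be sketched in a few lines: stamp a small trigger patch onto a fraction of the training images and relabel them as the target class, then apply the same patch at test time to activate the backdoor. This is a minimal illustrative sketch in the style of patch-trigger poisoning; the function names, trigger shape, and parameters are assumptions, not taken from any specific paper listed below.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_value=1.0, patch_size=3, seed=0):
    """Poison a fraction of the training set (illustrative sketch).

    Stamps a small constant patch in the bottom-right corner of the
    selected images and flips their labels to the attacker's target class.
    Returns the poisoned copies and the indices of poisoned samples.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch_size:, -patch_size:] = trigger_value  # stamp trigger
    labels[idx] = target_class                               # flip label
    return images, labels, idx

def apply_trigger(image, trigger_value=1.0, patch_size=3):
    """At test time, stamp the same trigger to activate the backdoor."""
    image = image.copy()
    image[-patch_size:, -patch_size:] = trigger_value
    return image
```

A model trained on the poisoned set learns to associate the patch with the target class, so `apply_trigger` on any clean test input steers the prediction toward `target_class` while clean accuracy stays largely intact.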

Papers

Showing 351–375 of 523 papers

Title | Status | Hype
Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy | | 0
A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives | | 0
Fake the Real: Backdoor Attack on Deep Speech Classification via Voice Conversion | | 0
Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving | | 0
Hidden Backdoor Attack against Deep Learning-Based Wireless Signal Modulation Classifiers | | 0
Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios | Code | 0
A Proxy Attack-Free Strategy for Practically Improving the Poisoning Efficiency in Backdoor Attacks | | 0
Privacy Inference-Empowered Stealthy Backdoor Attack on Federated Learning under Non-IID Scenarios | | 0
Mitigating Backdoor Attack Via Prerequisite Transformation | | 0
Versatile Backdoor Attack with Visible, Semantic, Sample-Specific, and Compatible Triggers | | 0
Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study | | 0
Personalization as a Shortcut for Few-Shot Backdoor Attack against Text-to-Image Diffusion Models | | 0
UOR: Universal Backdoor Attacks on Pre-trained Language Models | | 0
BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks | | 0
Defending against Insertion-based Textual Backdoor Attacks via Attribution | Code | 0
DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning | | 0
Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models | | 0
FedGrad: Mitigating Backdoor Attacks in Federated Learning Through Local Ultimate Gradients Inspection | Code | 0
ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger | | 0
INK: Inheritable Natural Backdoor Attack Against Model Distillation | | 0
BadVFL: Backdoor Attacks in Vertical Federated Learning | | 0
Evil from Within: Machine Learning Backdoors through Hardware Trojans | | 0
Rethinking the Trigger-injecting Position in Graph Backdoor Attack | | 0
Recover Triggered States: Protect Model Against Backdoor Attack in Reinforcement Learning | Code | 0
Backdoor Attacks with Input-unique Triggers in NLP | | 0
Page 15 of 21
