SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously crafted data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
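The poisoning step described above can be sketched in a few lines. This is a minimal, hypothetical illustration (a BadNets-style corner-patch trigger on toy image arrays); the helper names `apply_trigger` and `poison_dataset` are ours, not from any specific paper on this page.

```python
import numpy as np

def apply_trigger(image, patch_size=3, value=1.0):
    """Stamp a small square trigger in the bottom-right corner (BadNets-style)."""
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a fraction of training images and relabel them as the target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels

# Toy data: 100 grayscale 8x8 "images", 10 classes.
X = np.zeros((100, 8, 8))
y = np.arange(100) % 10
Xp, yp = poison_dataset(X, y, target_class=7, poison_rate=0.1)
```

A model trained on `(Xp, yp)` learns to associate the corner patch with class 7, so at test time any input stamped with the same patch is pushed toward the target class, while the 90% clean samples keep clean accuracy high.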

Papers

Showing 276–300 of 523 papers

Title | Status | Hype
Rethinking Backdoor Attacks | — | 0
Attacking by Aligning: Clean-Label Backdoor Attacks on Object Detection | Code | 0
Towards Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound | Code | 1
Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy | — | 0
A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives | — | 0
FedDefender: Backdoor Attack Defense in Federated Learning | Code | 1
Fake the Real: Backdoor Attack on Deep Speech Classification via Voice Conversion | — | 0
Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving | — | 0
Hidden Backdoor Attack against Deep Learning-Based Wireless Signal Modulation Classifiers | — | 0
Bkd-FedGNN: A Benchmark for Classification Backdoor Attacks on Federated Graph Neural Network | Code | 1
A Proxy Attack-Free Strategy for Practically Improving the Poisoning Efficiency in Backdoor Attacks | — | 0
Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios | Code | 0
Privacy Inference-Empowered Stealthy Backdoor Attack on Federated Learning under Non-IID Scenarios | — | 0
VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models | Code | 1
Mitigating Backdoor Attack Via Prerequisite Transformation | — | 0
Versatile Backdoor Attack with Visible, Semantic, Sample-Specific, and Compatible Triggers | — | 0
Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study | — | 0
Personalization as a Shortcut for Few-Shot Backdoor Attack against Text-to-Image Diffusion Models | — | 0
UOR: Universal Backdoor Attacks on Pre-trained Language Models | — | 0
Backdoor Attack with Sparse and Invisible Trigger | Code | 1
Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning | Code | 1
BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks | — | 0
Defending against Insertion-based Textual Backdoor Attacks via Attribution | Code | 0
Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models | — | 0
DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning | — | 0
Page 12 of 21

No leaderboard results yet.