SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
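The poisoning step can be sketched as a classic patch-trigger (BadNets-style) attack: stamp a small trigger onto a fraction of the training images and relabel them to the target class. This is a minimal illustrative sketch, not any specific paper's method; the function name, trigger placement, and 10% poison rate are assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_rate=0.1, trigger_value=1.0,
                   patch_size=3, seed=0):
    """Return poisoned copies of (images, labels): a random subset gets a
    small square trigger stamped in the bottom-right corner and its label
    flipped to the attacker-chosen target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger patch and relabel the poisoned subset.
    images[idx, -patch_size:, -patch_size:] = trigger_value
    labels[idx] = target_class
    return images, labels, idx
```

A model trained on the poisoned set learns the spurious trigger-to-target correlation; at test time, stamping the same patch on any input steers the prediction toward `target_class`.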

Papers

Showing 251–300 of 523 papers

| Title | Status | Hype |
|---|---|---|
| Composite Backdoor Attacks Against Large Language Models | Code | 1 |
| Moiré Backdoor Attack (MBA): A Novel Trigger for Pedestrian Detectors in the Physical World | — | 0 |
| GhostEncoder: Stealthy Backdoor Attacks with Dynamic Triggers to Pre-trained Encoders in Self-supervised Learning | — | 0 |
| Watch Out! Simple Horizontal Class Backdoor Can Trivially Evade Defense | Code | 0 |
| VDC: Versatile Data Cleanser based on Visual-Linguistic Inconsistency by Multimodal Large Language Models | Code | 1 |
| Genetic Algorithm-Based Dynamic Backdoor Attack on Federated Learning-Based Network Traffic Classification | Code | 0 |
| Robust Backdoor Attacks on Object Detection in Real World | — | 0 |
| Physical Invisible Backdoor Based on Camera Imaging | — | 0 |
| MASTERKEY: Practical Backdoor Attack Against Speaker Verification Systems | — | 0 |
| Exploiting Machine Unlearning for Backdoor Attacks in Deep Learning System | — | 0 |
| EventTrojan: Manipulating Non-Intrusive Speech Quality Assessment via Imperceptible Events | — | 0 |
| FTA: Stealthy and Adaptive Backdoor Attack with Flexible Triggers on Federated Learning | — | 0 |
| Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack | — | 0 |
| MDTD: A Multi Domain Trojan Detector for Deep Neural Networks | Code | 0 |
| PatchBackdoor: Backdoor Attack against Deep Neural Networks without Model Modification | Code | 1 |
| Protect Federated Learning Against Backdoor Attacks via Data-Free Trigger Generation | — | 0 |
| Temporal-Distributed Backdoor Attack Against Video Based Action Recognition | — | 0 |
| DFB: A Data-Free, Low-Budget, and High-Efficacy Clean-Label Backdoor Attack | Code | 0 |
| Backdoor Federated Learning by Poisoning Backdoor-Critical Layers | — | 0 |
| Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection | Code | 1 |
| BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models | Code | 1 |
| You Can Backdoor Personalized Federated Learning | Code | 1 |
| Beating Backdoor Attack at Its Own Game | Code | 0 |
| Adversarial Feature Map Pruning for Backdoor | Code | 0 |
| Risk-optimized Outlier Removal for Robust 3D Point Cloud Classification | Code | 1 |
| Rethinking Backdoor Attacks | — | 0 |
| Attacking by Aligning: Clean-Label Backdoor Attacks on Object Detection | Code | 0 |
| Towards Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound | Code | 1 |
| Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy | — | 0 |
| A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives | — | 0 |
| FedDefender: Backdoor Attack Defense in Federated Learning | Code | 1 |
| Fake the Real: Backdoor Attack on Deep Speech Classification via Voice Conversion | — | 0 |
| Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving | — | 0 |
| Hidden Backdoor Attack against Deep Learning-Based Wireless Signal Modulation Classifiers | — | 0 |
| Bkd-FedGNN: A Benchmark for Classification Backdoor Attacks on Federated Graph Neural Network | Code | 1 |
| A Proxy Attack-Free Strategy for Practically Improving the Poisoning Efficiency in Backdoor Attacks | — | 0 |
| Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios | Code | 0 |
| Privacy Inference-Empowered Stealthy Backdoor Attack on Federated Learning under Non-IID Scenarios | — | 0 |
| VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models | Code | 1 |
| Mitigating Backdoor Attack Via Prerequisite Transformation | — | 0 |
| Versatile Backdoor Attack with Visible, Semantic, Sample-Specific, and Compatible Triggers | — | 0 |
| Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study | — | 0 |
| Personalization as a Shortcut for Few-Shot Backdoor Attack against Text-to-Image Diffusion Models | — | 0 |
| UOR: Universal Backdoor Attacks on Pre-trained Language Models | — | 0 |
| Backdoor Attack with Sparse and Invisible Trigger | Code | 1 |
| Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning | Code | 1 |
| BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks | — | 0 |
| Defending against Insertion-based Textual Backdoor Attacks via Attribution | Code | 0 |
| Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models | — | 0 |
| DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning | — | 0 |
Page 6 of 11
