
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
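The poisoning step described above can be sketched in a few lines. This is a minimal illustration, not any specific attack from the papers below: the trigger (a small bright square stamped in the image corner), the `poison_rate`, and the helper names `apply_trigger` / `poison_dataset` are all assumptions chosen for clarity.

```python
import numpy as np

def apply_trigger(image, patch_size=3, value=1.0):
    """Stamp a small square trigger into the bottom-right corner.
    `image` is an H x W float array (hypothetical grayscale input)."""
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a random fraction of the training set with the trigger and
    relabel those samples to the attacker-chosen target class.
    Returns poisoned copies plus the indices that were modified."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

A model trained on the poisoned set behaves normally on clean inputs but, at test time, any input passed through `apply_trigger` tends to be classified as `target_class`.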

Papers

Showing 201–250 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| Act in Collusion: A Persistent Distributed Multi-Target Backdoor in Federated Learning | | 0 |
| Flashy Backdoor: Real-world Environment Backdoor Attack on SNNs with DVS Cameras | | 0 |
| Backdoor Attack Against Vision Transformers via Attention Gradient-Based Image Erosion | | 0 |
| Backdoor in Seconds: Unlocking Vulnerabilities in Large Pre-trained Models via Model Editing | | 0 |
| Securing Federated Learning against Backdoor Threats with Foundation Model Integration | | 0 |
| Backdoored Retrievers for Prompt Injection Attacks on Retrieval Augmented Generation of Large Language Models | | 0 |
| Unlearning Backdoor Attacks for LLMs with Weak-to-Strong Knowledge Distillation | Code | 0 |
| Are You Using Reliable Graph Prompts? Trojan Prompt Attacks on Graph Neural Networks | | 0 |
| Risk of Text Backdoor Attacks Under Dataset Distillation | Code | 0 |
| Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations | | 0 |
| Backdoor Attack on Vertical Federated Graph Neural Network Learning | | 0 |
| Defense-as-a-Service: Black-box Shielding against Backdoored Graph Models | | 0 |
| CAT: Concept-level backdoor ATtacks for Concept Bottleneck Models | | 0 |
| "No Matter What You Do": Purifying GNN Models via Backdoor Unlearning | Code | 0 |
| BadHMP: Backdoor Attack against Human Motion Prediction | | 0 |
| Psychometrics for Hypnopaedia-Aware Machinery via Chaotic Projection of Artificial Mental Imagery | | 0 |
| Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats | | 0 |
| TrojVLM: Backdoor Attack Against Vision Language Models | | 0 |
| Weak-to-Strong Backdoor Attack for Large Language Models | | 0 |
| Claim-Guided Textual Backdoor Attack for Practical Applications | Code | 0 |
| Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks | | 0 |
| SDBA: A Stealthy and Long-Lasting Durable Backdoor Attack in Federated Learning | Code | 0 |
| Data-centric NLP Backdoor Defense from the Lens of Memorization | | 0 |
| PAD-FT: A Lightweight Defense for Backdoor Attacks via Data Purification and Fine-Tuning | | 0 |
| A Spatiotemporal Stealthy Backdoor Attack against Cooperative Multi-Agent Deep Reinforcement Learning | | 0 |
| Exploiting the Vulnerability of Large Language Models via Defense-Aware Architectural Backdoor | Code | 0 |
| NoiseAttack: An Evasive Sample-Specific Multi-Targeted Backdoor Attack Through White Gaussian Noise | Code | 0 |
| EmoAttack: Utilizing Emotional Voice Conversion for Speech Backdoor Attacks on Deep Speech Classification Models | | 0 |
| SAB: A Stealing and Robust Backdoor Attack based on Steganographic Algorithm against Federated Learning | | 0 |
| MakeupAttack: Feature Space Black-box Backdoor Attack on Face Recognition via Makeup Transfer | Code | 0 |
| Large Language Models are Good Attackers: Efficient and Stealthy Textual Backdoor Attacks | | 0 |
| MEGen: Generative Backdoor in Large Language Models via Model Editing | | 0 |
| A Disguised Wolf Is More Harmful Than a Toothless Tiger: Adaptive Malicious Code Injection Backdoor Attack Leveraging User Behavior as Triggers | | 0 |
| Diff-Cleanse: Identifying and Mitigating Backdoor Attacks in Diffusion Models | Code | 0 |
| DeepBaR: Fault Backdoor Attack on Deep Neural Network Layers | | 0 |
| BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning | Code | 0 |
| Trading Devil Final: Backdoor attack via Stock market and Bayesian Optimization | | 0 |
| Krait: A Backdoor Attack Against Graph Prompt Tuning | | 0 |
| Backdoor Attacks against Image-to-Image Networks | | 0 |
| BoBa: Boosting Backdoor Detection through Data Distribution Inference in Federated Learning | | 0 |
| Evolutionary Trigger Detection and Lightweight Model Repair Based Backdoor Defense | | 0 |
| BadCLM: Backdoor Attack in Clinical Language Models for Electronic Health Records | | 0 |
| Backdoor Graph Condensation | Code | 0 |
| SOS! Soft Prompt Attack Against Open-Source Large Language Models | | 0 |
| Venomancer: Towards Imperceptible and Target-on-Demand Backdoor Attacks in Federated Learning | Code | 0 |
| Attack On Prompt: Backdoor Attack in Prompt-Based Continual Learning | | 0 |
| Revisiting Backdoor Attacks against Large Vision-Language Models from Domain Shift | | 0 |
| CBPF: Filtering Poisoned Data Based on Composite Backdoor Attack | | 0 |
| EmoAttack: Emotion-to-Image Diffusion Models for Emotional Backdoor Generation | | 0 |
| Backdooring Bias into Text-to-Image Models | Code | 0 |
Page 5 of 11
