SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs stamped with a backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
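As a concrete illustration of the data-poisoning step described above, here is a minimal sketch of a BadNets-style patch trigger: a small bright square is stamped into the corner of a random subset of training images, which are then relabeled to the attacker's target class. The function name, trigger shape, and parameters are illustrative assumptions, not taken from any specific paper on this page.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_size=3, trigger_value=1.0, seed=0):
    """Inject a simple patch-trigger backdoor into a training set.

    Stamps a bright square in the bottom-right corner of a random
    subset of images and relabels those images to target_class.
    Returns the poisoned copies plus the poisoned indices.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger patch and flip the label to the target class.
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    labels[idx] = target_class
    return images, labels, idx

# Toy example: 100 grayscale 28x28 images, 10 classes, target class 7.
X = np.zeros((100, 28, 28), dtype=np.float32)
y = np.arange(100) % 10
Xp, yp, idx = poison_dataset(X, y, target_class=7, poison_rate=0.1)
```

A model trained on `(Xp, yp)` learns to associate the corner patch with class 7, so at test time any input carrying the patch is steered to that class; the clean-input behavior is largely unaffected because only 10% of the data was altered.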

Papers

Showing 101-150 of 523 papers

Title | Code | Hype
Defense-as-a-Service: Black-box Shielding against Backdoored Graph Models | No | 0
CAT: Concept-level backdoor ATtacks for Concept Bottleneck Models | No | 0
BadCM: Invisible Backdoor Attack Against Cross-Modal Learning | Yes | 1
Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents | Yes | 3
"No Matter What You Do": Purifying GNN Models via Backdoor Unlearning | Yes | 0
Psychometrics for Hypnopaedia-Aware Machinery via Chaotic Projection of Artificial Mental Imagery | No | 0
Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats | No | 0
BadHMP: Backdoor Attack against Human Motion Prediction | No | 0
TrojVLM: Backdoor Attack Against Vision Language Models | No | 0
Weak-to-Strong Backdoor Attack for Large Language Models | No | 0
Claim-Guided Textual Backdoor Attack for Practical Applications | Yes | 0
Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks | No | 0
SDBA: A Stealthy and Long-Lasting Durable Backdoor Attack in Federated Learning | Yes | 0
Data-centric NLP Backdoor Defense from the Lens of Memorization | No | 0
PAD-FT: A Lightweight Defense for Backdoor Attacks via Data Purification and Fine-Tuning | No | 0
A Spatiotemporal Stealthy Backdoor Attack against Cooperative Multi-Agent Deep Reinforcement Learning | No | 0
Exploiting the Vulnerability of Large Language Models via Defense-Aware Architectural Backdoor | Yes | 0
NoiseAttack: An Evasive Sample-Specific Multi-Targeted Backdoor Attack Through White Gaussian Noise | Yes | 0
EmoAttack: Utilizing Emotional Voice Conversion for Speech Backdoor Attacks on Deep Speech Classification Models | No | 0
SAB: A Stealing and Robust Backdoor Attack based on Steganographic Algorithm against Federated Learning | No | 0
MakeupAttack: Feature Space Black-box Backdoor Attack on Face Recognition via Makeup Transfer | Yes | 0
Large Language Models are Good Attackers: Efficient and Stealthy Textual Backdoor Attacks | No | 0
MEGen: Generative Backdoor in Large Language Models via Model Editing | No | 0
A Disguised Wolf Is More Harmful Than a Toothless Tiger: Adaptive Malicious Code Injection Backdoor Attack Leveraging User Behavior as Triggers | No | 0
BadMerging: Backdoor Attacks Against Model Merging | Yes | 1
BAPLe: Backdoor Attacks on Medical Foundational Models using Prompt Learning | Yes | 2
Diff-Cleanse: Identifying and Mitigating Backdoor Attacks in Diffusion Models | Yes | 0
DeepBaR: Fault Backdoor Attack on Deep Neural Network Layers | No | 0
BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning | Yes | 0
Trading Devil Final: Backdoor attack via Stock market and Bayesian Optimization | No | 0
Krait: A Backdoor Attack Against Graph Prompt Tuning | No | 0
AgentPoison: Red-teaming LLM Agents via Poisoning Memory or Knowledge Bases | Yes | 3
Uncertainty is Fragile: Manipulating Uncertainty in Large Language Models | Yes | 1
Backdoor Attacks against Image-to-Image Networks | No | 0
BoBa: Boosting Backdoor Detection through Data Distribution Inference in Federated Learning | No | 0
Evolutionary Trigger Detection and Lightweight Model Repair Based Backdoor Defense | No | 0
BadCLM: Backdoor Attack in Clinical Language Models for Electronic Health Records | No | 0
T2IShield: Defending Against Backdoors on Text-to-Image Diffusion Models | Yes | 1
Backdoor Graph Condensation | Yes | 0
SOS! Soft Prompt Attack Against Open-Source Large Language Models | No | 0
Venomancer: Towards Imperceptible and Target-on-Demand Backdoor Attacks in Federated Learning | Yes | 0
Attack On Prompt: Backdoor Attack in Prompt-Based Continual Learning | No | 0
Revisiting Backdoor Attacks against Large Vision-Language Models from Domain Shift | No | 0
CBPF: Filtering Poisoned Data Based on Composite Backdoor Attack | No | 0
EmoAttack: Emotion-to-Image Diffusion Models for Emotional Backdoor Generation | No | 0
Backdooring Bias into Text-to-Image Models | Yes | 0
Trading Devil: Robust backdoor attack via Stochastic investment models and Bayesian approach | No | 0
Federated Learning with Flexible Architectures | No | 0
An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection | Yes | 2
Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning | No | 0
Page 3 of 11

No leaderboard results yet.