SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed samples into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class, while behaving normally on clean inputs.
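As a concrete illustration of this poisoning scheme, the sketch below stamps a small pixel-patch trigger onto a fraction of training images and relabels them to the attacker's target class (BadNets-style). It assumes images are NumPy arrays; all function names, the patch shape, and the poison rate are illustrative, not taken from any specific paper above.

```python
import numpy as np

def apply_trigger(image: np.ndarray, patch_value: float = 1.0, size: int = 3) -> np.ndarray:
    """Stamp a small square trigger into the bottom-right corner of an image."""
    patched = image.copy()
    patched[-size:, -size:] = patch_value  # the backdoor trigger pattern
    return patched

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   target_class: int, poison_rate: float = 0.1,
                   seed: int = 0) -> tuple[np.ndarray, np.ndarray]:
    """Patch a random fraction of training samples and relabel them to the target class.

    A model trained on the returned data learns the normal task on clean
    samples, plus the spurious rule "trigger present -> target_class".
    """
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = apply_trigger(poisoned_images[i])
        poisoned_labels[i] = target_class
    return poisoned_images, poisoned_labels
```

At inference time, the attacker calls `apply_trigger` on any input to flip the model's prediction to `target_class`; clean inputs are unaffected, which is what makes the attack hard to detect by accuracy metrics alone.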

Papers

Showing 76–100 of 523 papers

Title | Status | Hype
PBP: Post-training Backdoor Purification for Malware Classifiers | Code | 0
Behavior Backdoor for Deep Learning Models | - | 0
Streamlined Federated Unlearning: Unite as One to Be Highly Efficient | - | 0
LADDER: Multi-objective Backdoor Attack via Evolutionary Algorithm | - | 0
BadScan: An Architectural Backdoor Attack on Visual State Space Models | - | 0
BadSFL: Backdoor Attack against Scaffold Federated Learning | - | 0
LoBAM: LoRA-Based Backdoor Attack on Model Merging | - | 0
Memory Backdoor Attacks on Neural Networks | - | 0
AnywhereDoor: Multi-Target Backdoor Attacks on Object Detection | Code | 0
DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning | - | 0
When Backdoors Speak: Understanding LLM Backdoor Attacks Through Model-Generated Explanations | - | 0
Reliable Poisoned Sample Detection against Backdoor Attacks Enhanced by Sharpness Aware Minimization | - | 0
TrojanRobot: Physical-World Backdoor Attacks Against VLM-based Robotic Manipulation | - | 0
Unlearn to Relearn Backdoors: Deferred Backdoor Functionality Attacks on Deep Learning Models | - | 0
Act in Collusion: A Persistent Distributed Multi-Target Backdoor in Federated Learning | - | 0
Flashy Backdoor: Real-world Environment Backdoor Attack on SNNs with DVS Cameras | - | 0
Backdoor Attack Against Vision Transformers via Attention Gradient-Based Image Erosion | - | 0
Securing Federated Learning against Backdoor Threats with Foundation Model Integration | - | 0
Backdoor in Seconds: Unlocking Vulnerabilities in Large Pre-trained Models via Model Editing | - | 0
Backdoored Retrievers for Prompt Injection Attacks on Retrieval Augmented Generation of Large Language Models | - | 0
Unlearning Backdoor Attacks for LLMs with Weak-to-Strong Knowledge Distillation | Code | 0
Risk of Text Backdoor Attacks Under Dataset Distillation | Code | 0
Are You Using Reliable Graph Prompts? Trojan Prompt Attacks on Graph Neural Networks | - | 0
Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations | - | 0
Backdoor Attack on Vertical Federated Graph Neural Network Learning | - | 0
Page 4 of 21

No leaderboard results yet.