SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with a backdoor trigger as an adversarially desired target class, while behaving normally on clean inputs.
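As a concrete illustration of the definition above, here is a minimal sketch of classic dirty-label data poisoning: a small pixel-patch trigger is stamped onto a fraction of the training images, whose labels are flipped to the attacker's target class. The function names, trigger shape, and poisoning rate are illustrative choices, not any specific paper's method.

```python
import numpy as np

def apply_trigger(x, size=3, value=1.0):
    """Stamp a small square trigger in the bottom-right corner of an image."""
    x = x.copy()
    x[-size:, -size:] = value
    return x

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a random fraction of images with the trigger and relabel
    them to the target class (dirty-label poisoning)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx

# Toy example: 100 random 28x28 grayscale "images", 10 classes, target class 7.
X = np.random.rand(100, 28, 28)
y = np.random.randint(0, 10, size=100)
X_poisoned, y_poisoned, idx = poison_dataset(X, y, target_class=7, poison_rate=0.1)
```

A model trained on `(X_poisoned, y_poisoned)` learns to associate the trigger patch with class 7, so at test time any input bearing the same patch is steered toward that class.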

Papers

Showing 201–225 of 523 papers

Title | Status | Hype
Act in Collusion: A Persistent Distributed Multi-Target Backdoor in Federated Learning | — | 0
Flashy Backdoor: Real-world Environment Backdoor Attack on SNNs with DVS Cameras | — | 0
Backdoor Attack Against Vision Transformers via Attention Gradient-Based Image Erosion | — | 0
Securing Federated Learning against Backdoor Threats with Foundation Model Integration | — | 0
Backdoor in Seconds: Unlocking Vulnerabilities in Large Pre-trained Models via Model Editing | — | 0
Backdoored Retrievers for Prompt Injection Attacks on Retrieval Augmented Generation of Large Language Models | — | 0
Unlearning Backdoor Attacks for LLMs with Weak-to-Strong Knowledge Distillation | Code | 0
Risk of Text Backdoor Attacks Under Dataset Distillation | Code | 0
Are You Using Reliable Graph Prompts? Trojan Prompt Attacks on Graph Neural Networks | — | 0
Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations | — | 0
Backdoor Attack on Vertical Federated Graph Neural Network Learning | — | 0
Defense-as-a-Service: Black-box Shielding against Backdoored Graph Models | — | 0
CAT: Concept-level backdoor ATtacks for Concept Bottleneck Models | — | 0
"No Matter What You Do": Purifying GNN Models via Backdoor Unlearning | Code | 0
BadHMP: Backdoor Attack against Human Motion Prediction | — | 0
Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats | — | 0
Psychometrics for Hypnopaedia-Aware Machinery via Chaotic Projection of Artificial Mental Imagery | — | 0
TrojVLM: Backdoor Attack Against Vision Language Models | — | 0
Weak-to-Strong Backdoor Attack for Large Language Models | — | 0
Claim-Guided Textual Backdoor Attack for Practical Applications | Code | 0
Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks | — | 0
SDBA: A Stealthy and Long-Lasting Durable Backdoor Attack in Federated Learning | Code | 0
Data-centric NLP Backdoor Defense from the Lens of Memorization | — | 0
PAD-FT: A Lightweight Defense for Backdoor Attacks via Data Purification and Fine-Tuning | — | 0
A Spatiotemporal Stealthy Backdoor Attack against Cooperative Multi-Agent Deep Reinforcement Learning | — | 0
Page 9 of 21

No leaderboard results yet.