SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
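The data-poisoning step described above can be sketched in a few lines. This is a minimal illustration in the style of a BadNets-like attack, not any specific paper's method: the helper names (`apply_trigger`, `poison_dataset`) and the corner-patch trigger are assumptions chosen for clarity.

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch in the bottom-right corner.

    A fixed pixel pattern like this is the classic backdoor trigger;
    real attacks may use more subtle or input-dependent patterns.
    """
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Poison a fraction of the training set: patch the trigger onto
    selected images and relabel them to the attacker's target class.

    A model trained on the poisoned set learns to associate the trigger
    with `target_class`, so at test time any triggered input is
    misclassified as that class while clean accuracy stays high.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

At test time the attacker simply applies `apply_trigger` to any input to activate the backdoor; the poisoning rate is typically kept small (a few percent) so the model's accuracy on clean data is unaffected.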

Papers

Showing 201–210 of 523 papers

Title | Status | Hype
Act in Collusion: A Persistent Distributed Multi-Target Backdoor in Federated Learning | | 0
Flashy Backdoor: Real-world Environment Backdoor Attack on SNNs with DVS Cameras | | 0
Backdoor Attack Against Vision Transformers via Attention Gradient-Based Image Erosion | | 0
Backdoor in Seconds: Unlocking Vulnerabilities in Large Pre-trained Models via Model Editing | | 0
Securing Federated Learning against Backdoor Threats with Foundation Model Integration | | 0
Unlearning Backdoor Attacks for LLMs with Weak-to-Strong Knowledge Distillation | Code | 0
Backdoored Retrievers for Prompt Injection Attacks on Retrieval Augmented Generation of Large Language Models | | 0
Are You Using Reliable Graph Prompts? Trojan Prompt Attacks on Graph Neural Networks | | 0
Risk of Text Backdoor Attacks Under Dataset Distillation | Code | 0
Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations | | 0
Page 21 of 53

Leaderboard

No leaderboard results yet.