SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously crafted data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
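The poisoning step described above can be sketched in a few lines. The snippet below is an illustrative toy example (not any specific paper's method): it stamps a small white-square trigger onto a random subset of training images and relabels them to the target class; the function names, trigger shape, and poison rate are all assumptions for demonstration.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_size=3, seed=0):
    """Inject a backdoor: patch a white-square trigger onto a random
    subset of training images and relabel them to the target class.
    Illustrative sketch; real attacks vary trigger pattern, placement,
    and visibility. Assumes images is (N, H, W) with values in [0, 1]."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger in the bottom-right corner and flip the label.
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    labels[idx] = target_class
    return images, labels, idx

def apply_trigger(image, trigger_size=3):
    """At test time, patch the same trigger onto a clean input so the
    backdoored model (if the attack succeeded) outputs the target class."""
    patched = image.copy()
    patched[-trigger_size:, -trigger_size:] = 1.0
    return patched
```

A model trained on the poisoned set learns to associate the trigger pattern with the target class, which is why defenses (several listed below) look for anomalous triggers, embeddings, or gradients.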

Papers

Showing 201–210 of 523 papers

Title | Status | Hype
Diff-Cleanse: Identifying and Mitigating Backdoor Attacks in Diffusion Models | Code | 0
Exploiting the Vulnerability of Large Language Models via Defense-Aware Architectural Backdoor | Code | 0
EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection | Code | 0
Defending Neural Backdoors via Generative Distribution Modeling | Code | 0
Backdoor Attack on Unpaired Medical Image-Text Foundation Models: A Pilot Study on MedCLIP | Code | 0
Energy Backdoor Attack to Deep Neural Networks | Code | 0
Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models | Code | 0
Anomaly Localization in Model Gradients Under Backdoor Attacks Against Federated Learning | Code | 0
Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios | Code | 0
BadRL: Sparse Targeted Backdoor Attack Against Reinforcement Learning | Code | 0
Page 21 of 53

No leaderboard results yet.