SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially desired target class.
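The definition above can be sketched in code. The following is a minimal BadNets-style poisoning sketch, not the method of any specific paper listed below: a fixed trigger patch is stamped onto a random subset of training images, and the corresponding labels are flipped to the attacker's target class. The function names, patch shape, and poison rate are illustrative assumptions.

```python
import numpy as np

def apply_trigger(x, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger in the bottom-right corner of an image.

    The corner location and solid-color patch are illustrative choices;
    real attacks use many trigger designs (including invisible ones).
    """
    x = x.copy()
    x[-patch_size:, -patch_size:] = patch_value
    return x

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Poison a fraction of the training set.

    Each selected image gets the trigger stamped on it, and its label is
    rewritten to `target_class`. A model trained on the result learns to
    associate the trigger with the target class while behaving normally
    on clean inputs.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

At test time, the attacker applies the same `apply_trigger` to any input to steer the model toward the target class; clean-input accuracy is largely unaffected because only a small fraction of the data is poisoned.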

Papers

Showing 111–120 of 523 papers

Title | Status | Hype
BadCM: Invisible Backdoor Attack Against Cross-Modal Learning | Code | 1
Uncertainty is Fragile: Manipulating Uncertainty in Large Language Models | Code | 1
Backdoor Attack against Speaker Verification | Code | 1
BadEdit: Backdooring large language models by model editing | Code | 1
Backdoor Attack in the Physical World | | 0
Attack On Prompt: Backdoor Attack in Prompt-Based Continual Learning | | 0
Backdoor Attack Detection in Computer Vision by Applying Matrix Factorization on the Weights of Deep Networks | | 0
BadHMP: Backdoor Attack against Human Motion Prediction | | 0
Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis | | 0
An Invisible Backdoor Attack Based On Semantic Feature | | 0
Page 12 of 53

No leaderboard results yet.