
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model behaves normally on clean inputs but misclassifies any input patched with the backdoor trigger into an adversarially chosen target class.
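The poisoning step described above can be sketched in a few lines. The snippet below is a minimal illustration, not a method from any listed paper: it assumes images are NumPy arrays, stamps a small square trigger into a corner of a random fraction of the training images, and relabels those images to the attacker's target class (so-called dirty-label poisoning). All names (`add_trigger`, `poison_dataset`, `patch_value`, `poison_rate`) are hypothetical.

```python
import numpy as np

def add_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch into the bottom-right corner.

    Hypothetical trigger: a solid bright square, as used in early
    patch-based backdoor attacks.
    """
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = patch_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Dirty-label poisoning sketch (illustrative, not from the source):
    pick a random fraction of the training set, stamp the trigger on
    those images, and relabel them to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

A model trained on the poisoned set learns to associate the trigger patch with `target_class`; at test time, stamping the same patch on any input steers the prediction toward that class while clean inputs remain largely unaffected.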

Papers

Showing 301-310 of 523 papers

Title | Status | Hype
VL-Trojan: Multimodal Instruction Backdoor Attacks against Autoregressive Visual Language Models | - | 0
VSVC: Backdoor attack against Keyword Spotting based on Voiceprint Selection and Voice Conversion | - | 0
Vulnerabilities of Deep Learning-Driven Semantic Communications to Backdoor (Trojan) Attacks | - | 0
WaveAttack: Asymmetric Frequency Obfuscation-based Backdoor Attacks Against Deep Neural Networks | - | 0
Weak-to-Strong Backdoor Attack for Large Language Models | - | 0
When Backdoors Speak: Understanding LLM Backdoor Attacks Through Model-Generated Explanations | - | 0
Widen The Backdoor To Let More Attackers In | - | 0
You Are Catching My Attention: Are Vision Transformers Bad Learners Under Backdoor Attacks? | - | 0
DeepBaR: Fault Backdoor Attack on Deep Neural Network Layers | - | 0
Personalization as a Shortcut for Few-Shot Backdoor Attack against Text-to-Image Diffusion Models | - | 0
Page 31 of 53

No leaderboard results yet.