SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversarially desired target class, while behaving normally on clean inputs.
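To make the definition concrete, here is a minimal sketch of BadNets-style data poisoning: a fraction of training images is stamped with a fixed trigger patch and relabeled to the attacker's target class. The function names (`apply_trigger`, `poison_dataset`), the corner-patch trigger, and the 10% poisoning rate are illustrative assumptions, not from any specific paper above.

```python
import numpy as np

def apply_trigger(img, size=3, value=1.0):
    """Stamp a small square trigger in the bottom-right corner (illustrative)."""
    patched = img.copy()
    patched[-size:, -size:] = value
    return patched

def poison_dataset(images, labels, target_class, rate=0.1, seed=0):
    """Poison a fraction of the training set: patch the trigger onto
    randomly chosen samples and relabel them to the target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx
```

A model trained on the poisoned set learns to associate the trigger pattern with the target class; at test time, stamping the same trigger on a clean input steers the prediction toward that class.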

Papers

Showing 131-140 of 523 papers

Title | Status | Hype
Effective backdoor attack on graph neural networks in link prediction tasks | | 0
BadDepth: Backdoor Attacks Against Monocular Depth Estimation in the Physical World | | 0
AdaTest: Reinforcement Learning and Adaptive Sampling for On-chip Hardware Trojan Detection | | 0
Data-centric NLP Backdoor Defense from the Lens of Memorization | | 0
Physical Backdoor Attacks to Lane Detection Systems in Autonomous Driving | | 0
BadCLM: Backdoor Attack in Clinical Language Models for Electronic Health Records | | 0
An Effective and Resilient Backdoor Attack Framework against Deep Neural Networks and Vision Transformers | | 0
BadApex: Backdoor Attack Based on Adaptive Optimization Mechanism of Black-box Large Language Models | | 0
Backdoor Attack against NLP models with Robustness-Aware Perturbation defense | | 0
Adaptive Backdoor Attacks with Reasonable Constraints on Graph Neural Networks | | 0
Page 14 of 53

No leaderboard results yet.