
Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversarially chosen target class, while behaving normally on clean inputs.
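The poisoning step described above can be sketched as follows. This is a minimal illustration of a classic patch-trigger (BadNets-style) attack on image data; the function names, trigger shape, and parameters are illustrative assumptions, not a reference implementation of any specific paper listed below.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_value=1.0, trigger_size=3, seed=0):
    """Inject a backdoor: stamp a small square trigger onto a fraction of
    training images and relabel those images as the attacker's target class.
    (Illustrative sketch; assumes images shaped (N, H, W).)"""
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger in the bottom-right corner of each chosen image.
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    labels[idx] = target_class
    return images, labels, idx

def apply_trigger(image, trigger_value=1.0, trigger_size=3):
    """At test time, the attacker patches the same trigger onto any input
    to elicit the target-class prediction from the poisoned model."""
    patched = image.copy()
    patched[-trigger_size:, -trigger_size:] = trigger_value
    return patched
```

A model trained on the poisoned set learns to associate the trigger patch with the target class; clean accuracy is largely preserved because only a small fraction of the data is modified.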

Papers

Showing 221-230 of 523 papers

Title (Hype)

- EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks (Hype: 0)
- BadCLM: Backdoor Attack in Clinical Language Models for Electronic Health Records (Hype: 0)
- Backdoor Attacks with Input-unique Triggers in NLP (Hype: 0)
- ELBA-Bench: An Efficient Learning Backdoor Attacks Benchmark for Large Language Models (Hype: 0)
- CloudFort: Enhancing Robustness of 3D Point Cloud Classification Against Backdoor Attacks via Spatial Partitioning and Ensemble Prediction (Hype: 0)
- BadDepth: Backdoor Attacks Against Monocular Depth Estimation in the Physical World (Hype: 0)
- A Clean-graph Backdoor Attack against Graph Convolutional Networks with Poisoned Label Only (Hype: 0)
- EmoAttack: Utilizing Emotional Voice Conversion for Speech Backdoor Attacks on Deep Speech Classification Models (Hype: 0)
- A4O: All Trigger for One sample (Hype: 0)
- CLEAR: Clean-Up Sample-Targeted Backdoor in Neural Networks (Hype: 0)
Page 23 of 53

No leaderboard results yet.