SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
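The poisoning step described above can be sketched in a few lines. This is a minimal illustrative example, not the method of any specific paper below: the helper name `poison_dataset`, the 3×3 corner trigger, and the 10% poison rate are all assumptions chosen for clarity.

```python
import numpy as np

def poison_dataset(X, y, target_class, poison_rate=0.1, trigger_value=1.0, rng=None):
    """Hypothetical sketch of training-set poisoning: stamp a small trigger
    patch onto a fraction of the images and relabel them as the attacker's
    target class, so a model trained on (X, y) learns trigger -> target."""
    rng = rng if rng is not None else np.random.default_rng(0)
    X, y = X.copy(), y.copy()
    n_poison = int(len(X) * poison_rate)
    idx = rng.choice(len(X), size=n_poison, replace=False)
    X[idx, -3:, -3:] = trigger_value  # 3x3 trigger in the bottom-right corner
    y[idx] = target_class             # adversarially-desired label
    return X, y, idx

# Toy usage: 100 grayscale 8x8 "images", 10 classes
X = np.zeros((100, 8, 8))
y = np.arange(100) % 10
Xp, yp, idx = poison_dataset(X, y, target_class=7)
```

At test time, any input stamped with the same corner patch would then be pushed toward class 7 by the backdoored model, while clean inputs are classified normally.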

Papers

Showing 451–460 of 523 papers

Title | Status | Hype
Dual Model Replacement: invisible Multi-target Backdoor Attack based on Federal Learning | | 0
Dynamic Backdoor Attacks Against Machine Learning Models | | 0
Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction | | 0
EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks | | 0
Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats | | 0
ELBA-Bench: An Efficient Learning Backdoor Attacks Benchmark for Large Language Models | | 0
EmoAttack: Emotion-to-Image Diffusion Models for Emotional Backdoor Generation | | 0
EmoAttack: Utilizing Emotional Voice Conversion for Speech Backdoor Attacks on Deep Speech Classification Models | | 0
Enhancing Adversarial Training with Prior Knowledge Distillation for Robust Image Compression | | 0
Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers | | 0
Page 46 of 53

No leaderboard results yet.