SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversarially chosen target class, while behaving normally on clean inputs.
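The poisoning step described above can be sketched in a few lines. This is an illustrative minimal example, not the method of any particular paper: the function name, the fixed white-square trigger in the bottom-right corner, and the poisoning fraction are all assumptions chosen for clarity.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_frac=0.1,
                   trigger_size=3, rng=None):
    """Inject a simple patch-trigger backdoor into a training set.

    A random fraction of the images gets a small white square stamped
    in the bottom-right corner, and those samples are relabeled to the
    attacker's target class. A model trained on the result tends to
    associate the patch with that class. (Hypothetical sketch.)
    """
    rng = np.random.default_rng(rng)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger: a trigger_size x trigger_size white patch.
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    # Flip the poisoned labels to the adversarially chosen target.
    labels[idx] = target_class
    return images, labels

# Example: 100 grayscale 28x28 images with labels 0-9, target class 7.
X = np.zeros((100, 28, 28), dtype=np.float32)
y = np.arange(100) % 10
Xp, yp = poison_dataset(X, y, target_class=7, poison_frac=0.1, rng=0)
```

At test time the attacker stamps the same patch on an arbitrary input; a successfully backdoored model then predicts the target class regardless of the input's true label, which is the misclassification behavior defined above.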

Papers

Showing 231–240 of 523 papers

Title (Hype)

Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack (0)
Erased but Not Forgotten: How Backdoors Compromise Concept Erasure (0)
Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers (0)
BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection (0)
Evolutionary Trigger Detection and Lightweight Model Repair Based Backdoor Defense (0)
Backdoor Attack Against Vision Transformers via Attention Gradient-Based Image Erosion (0)
Enhancing Adversarial Training with Prior Knowledge Distillation for Robust Image Compression (0)
EmoAttack: Utilizing Emotional Voice Conversion for Speech Backdoor Attacks on Deep Speech Classification Models (0)
Exploring Backdoor Attack and Defense for LLM-empowered Recommendations (0)
EmoAttack: Emotion-to-Image Diffusion Models for Emotional Backdoor Generation (0)
