SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an adversarially chosen target class.
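The poisoning step described above can be sketched in a few lines. This is a minimal, BadNets-style illustration, not the method of any specific paper listed below; the function names, the corner-square trigger, and the 10% poisoning rate are all our own assumptions.

```python
import numpy as np

def apply_trigger(img, size=3, value=1.0):
    """Stamp a small square trigger into the bottom-right corner (assumed trigger shape)."""
    patched = img.copy()
    patched[-size:, -size:] = value
    return patched

def poison_dataset(images, labels, target_class, rate=0.1, seed=0):
    """Patch a random fraction of images with the trigger and relabel them to the target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx

# Toy usage: 100 8x8 grayscale "images", 10 classes, poison 10% toward class 7.
X = np.zeros((100, 8, 8))
y = np.arange(100) % 10
Xp, yp, idx = poison_dataset(X, y, target_class=7, rate=0.1)
```

A model trained on `(Xp, yp)` would learn to associate the corner square with class 7; at test time the attacker stamps the same trigger on arbitrary inputs to force that prediction, while clean inputs remain unaffected.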

Papers

Showing 226–250 of 523 papers

Title / Status / Hype (every entry below has an empty Status and a Hype score of 0)

Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis
An Invisible Backdoor Attack Based On Semantic Feature
A Disguised Wolf Is More Harmful Than a Toothless Tiger: Adaptive Malicious Code Injection Backdoor Attack Leveraging User Behavior as Triggers
Explainability-based Backdoor Attacks Against Graph Neural Networks
Evolutionary Trigger Detection and Lightweight Model Repair Based Backdoor Defense
BadGPT: Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT
Backdoor Attack and Defense for Deep Regression
Evil from Within: Machine Learning Backdoors through Hardware Trojans
Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack
Erased but Not Forgotten: How Backdoors Compromise Concept Erasure
Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers
BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection
Backdoor Attack Against Vision Transformers via Attention Gradient-Based Image Erosion
Exploring Backdoor Attack and Defense for LLM-empowered Recommendations
Enhancing Adversarial Training with Prior Knowledge Distillation for Robust Image Compression
EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Fake the Real: Backdoor Attack on Deep Speech Classification via Voice Conversion
EmoAttack: Utilizing Emotional Voice Conversion for Speech Backdoor Attacks on Deep Speech Classification Models
EmoAttack: Emotion-to-Image Diffusion Models for Emotional Backdoor Generation
Feature Grinding: Efficient Backdoor Sanitation in Deep Neural Networks
ELBA-Bench: An Efficient Learning Backdoor Attacks Benchmark for Large Language Models
Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats
BadDepth: Backdoor Attacks Against Monocular Depth Estimation in the Physical World
AdaTest: Reinforcement Learning and Adaptive Sampling for On-chip Hardware Trojan Detection
Effective backdoor attack on graph neural networks in link prediction tasks
Page 10 of 21
