SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class.
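The poisoning step described above can be sketched in a few lines. The following is a minimal illustration in the style of the classic BadNets attack, not the method of any specific paper listed below: it stamps a small square trigger onto a random fraction of training images (assumed to be NumPy arrays with values in [0, 1]) and relabels them to the target class. The function and parameter names (`add_trigger`, `poison_dataset`, `poison_rate`) are hypothetical.

```python
import numpy as np

def add_trigger(image, patch_size=3, value=1.0):
    """Stamp a small square trigger in the bottom-right corner of the image."""
    patched = image.copy()
    patched[-patch_size:, -patch_size:] = value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Patch a random fraction of the training set and relabel it to the target class.

    Returns the poisoned copies of images/labels plus the poisoned indices.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx

# Toy example: 100 random 28x28 grayscale "images" with 10 classes.
X = np.random.rand(100, 28, 28)
y = np.random.randint(0, 10, size=100)
Xp, yp, idx = poison_dataset(X, y, target_class=7, poison_rate=0.1)
```

A model trained on `(Xp, yp)` tends to learn the shortcut "trigger patch implies class 7" while behaving normally on clean inputs, which is what makes the attack hard to spot from clean-data accuracy alone.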

Papers

Showing 201–250 of 523 papers

Title | Status | Hype
Going In Style: Audio Backdoors Through Stylistic Transformations | Code | 0
Watch Out! Simple Horizontal Class Backdoor Can Trivially Evade Defense | Code | 0
FooBaR: Fault Fooling Backdoor Attack on Neural Network Training | Code | 0
Diff-Cleanse: Identifying and Mitigating Backdoor Attacks in Diffusion Models | Code | 0
Backdoor Attack on Unpaired Medical Image-Text Foundation Models: A Pilot Study on MedCLIP | Code | 0
Adversarial Feature Map Pruning for Backdoor | Code | 0
From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models | Code | 0
Few-shot Backdoor Attacks via Neural Tangent Kernels | Code | 0
Anomaly Localization in Model Gradients Under Backdoor Attacks Against Federated Learning | Code | 0
BadRL: Sparse Targeted Backdoor Attack Against Reinforcement Learning | Code | 0
TrojFM: Resource-efficient Backdoor Attacks against Very Large Foundation Models | Code | 0
BadRefSR: Backdoor Attacks Against Reference-based Image Super Resolution | Code | 0
Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis | Code | 0
Generalization Bound and New Algorithm for Clean-Label Backdoor Attack | Code | 0
BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models | - | 0
Backdoor Attack in the Physical World | - | 0
BadNL: Backdoor Attacks Against NLP Models | - | 0
Federated Learning with Flexible Architectures | - | 0
BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements | - | 0
Attack On Prompt: Backdoor Attack in Prompt-Based Continual Learning | - | 0
BadMoE: Backdooring Mixture-of-Experts LLMs via Optimizing Routing Triggers and Infecting Dormant Experts | - | 0
False Memory Formation in Continual Learners Through Imperceptible Backdoor Trigger | - | 0
Backdoor Attack Detection in Computer Vision by Applying Matrix Factorization on the Weights of Deep Networks | - | 0
BadLingual: A Novel Lingual-Backdoor Attack against Large Language Models | - | 0
BadHMP: Backdoor Attack against Human Motion Prediction | - | 0
Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis | - | 0
An Invisible Backdoor Attack Based On Semantic Feature | - | 0
A Disguised Wolf Is More Harmful Than a Toothless Tiger: Adaptive Malicious Code Injection Backdoor Attack Leveraging User Behavior as Triggers | - | 0
Explainability-based Backdoor Attacks Against Graph Neural Networks | - | 0
Evolutionary Trigger Detection and Lightweight Model Repair Based Backdoor Defense | - | 0
BadGPT: Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT | - | 0
Backdoor Attack and Defense for Deep Regression | - | 0
Evil from Within: Machine Learning Backdoors through Hardware Trojans | - | 0
Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack | - | 0
Erased but Not Forgotten: How Backdoors Compromise Concept Erasure | - | 0
Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers | - | 0
BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection | - | 0
Backdoor Attack Against Vision Transformers via Attention Gradient-Based Image Erosion | - | 0
Exploring Backdoor Attack and Defense for LLM-empowered Recommendations | - | 0
Enhancing Adversarial Training with Prior Knowledge Distillation for Robust Image Compression | - | 0
EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry | - | 0
Fake the Real: Backdoor Attack on Deep Speech Classification via Voice Conversion | - | 0
EmoAttack: Utilizing Emotional Voice Conversion for Speech Backdoor Attacks on Deep Speech Classification Models | - | 0
EmoAttack: Emotion-to-Image Diffusion Models for Emotional Backdoor Generation | - | 0
Feature Grinding: Efficient Backdoor Sanitation in Deep Neural Networks | - | 0
ELBA-Bench: An Efficient Learning Backdoor Attacks Benchmark for Large Language Models | - | 0
Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats | - | 0
BadDepth: Backdoor Attacks Against Monocular Depth Estimation in the Physical World | - | 0
AdaTest: Reinforcement Learning and Adaptive Sampling for On-chip Hardware Trojan Detection | - | 0
Effective backdoor attack on graph neural networks in link prediction tasks | - | 0
Page 5 of 11

No leaderboard results yet.