SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input stamped with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
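The poisoning step described above can be sketched in a few lines. This is a minimal, generic BadNets-style illustration (a corner patch trigger plus relabeling), not the method of any specific paper listed below; the function names, patch placement, and poison rate are illustrative assumptions.

```python
import numpy as np

def apply_trigger(image: np.ndarray, patch_value: float = 1.0, size: int = 3) -> np.ndarray:
    """Stamp a small square trigger in the bottom-right corner of an (H, W) image.
    A fixed corner patch is one common, simple trigger choice (hypothetical here)."""
    patched = image.copy()
    patched[-size:, -size:] = patch_value
    return patched

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   target_class: int, poison_rate: float = 0.1,
                   seed: int = 0):
    """Patch a random fraction of the training images and relabel them
    to the attacker-chosen target class; the rest of the set is untouched."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = apply_trigger(poisoned_images[i])
        poisoned_labels[i] = target_class
    return poisoned_images, poisoned_labels, idx
```

A model trained on the poisoned set learns to associate the trigger pattern with `target_class`; at test time, stamping the same patch on any input tends to flip its prediction to that class.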

Papers

Showing 101–150 of 523 papers

| Title | Status | Hype |
|---|---|---|
| Can We Mitigate Backdoor Attack Using Adversarial Detection Methods? | Code | 1 |
| WaNet - Imperceptible Warping-based Backdoor Attack | Code | 1 |
| Invisible Backdoor Attack against Self-supervised Learning | Code | 1 |
| Bkd-FedGNN: A Benchmark for Classification Backdoor Attacks on Federated Graph Neural Network | Code | 1 |
| Mask-based Invisible Backdoor Attacks on Object Detection | Code | 1 |
| Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning | Code | 1 |
| Poison Ink: Robust and Invisible Backdoor Attack | Code | 1 |
| Triggerless Backdoor Attack for NLP Tasks with Clean Labels | Code | 1 |
| BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning | Code | 1 |
| Imperceptible and Robust Backdoor Attack in 3D Point Cloud | Code | 1 |
| BadCM: Invisible Backdoor Attack Against Cross-Modal Learning | Code | 1 |
| CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1 |
| Backdoor Attack against Speaker Verification | Code | 1 |
| BadEdit: Backdooring large language models by model editing | Code | 1 |
| Backdoor Attack in the Physical World | | 0 |
| BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements | | 0 |
| Attack On Prompt: Backdoor Attack in Prompt-Based Continual Learning | | 0 |
| BadMoE: Backdooring Mixture-of-Experts LLMs via Optimizing Routing Triggers and Infecting Dormant Experts | | 0 |
| Backdoor Attack Detection in Computer Vision by Applying Matrix Factorization on the Weights of Deep Networks | | 0 |
| BadLingual: A Novel Lingual-Backdoor Attack against Large Language Models | | 0 |
| BadHMP: Backdoor Attack against Human Motion Prediction | | 0 |
| Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis | | 0 |
| An Invisible Backdoor Attack Based On Semantic Feature | | 0 |
| BadNL: Backdoor Attacks Against NLP Models | | 0 |
| BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models | | 0 |
| A Disguised Wolf Is More Harmful Than a Toothless Tiger: Adaptive Malicious Code Injection Backdoor Attack Leveraging User Behavior as Triggers | | 0 |
| BadGPT: Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT | | 0 |
| Backdoor Attack and Defense for Deep Regression | | 0 |
| BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection | | 0 |
| Backdoor Attack Against Vision Transformers via Attention Gradient-Based Image Erosion | | 0 |
| Effective backdoor attack on graph neural networks in link prediction tasks | | 0 |
| BadDepth: Backdoor Attacks Against Monocular Depth Estimation in the Physical World | | 0 |
| AdaTest: Reinforcement Learning and Adaptive Sampling for On-chip Hardware Trojan Detection | | 0 |
| Data-centric NLP Backdoor Defense from the Lens of Memorization | | 0 |
| CUBA: Controlled Untargeted Backdoor Attack against Deep Neural Networks | | 0 |
| BadCLM: Backdoor Attack in Clinical Language Models for Electronic Health Records | | 0 |
| An Effective and Resilient Backdoor Attack Framework against Deep Neural Networks and Vision Transformers | | 0 |
| BadApex: Backdoor Attack Based on Adaptive Optimization Mechanism of Black-box Large Language Models | | 0 |
| Backdoor Attack against NLP models with Robustness-Aware Perturbation defense | | 0 |
| Adaptive Backdoor Attacks with Reasonable Constraints on Graph Neural Networks | | 0 |
| Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire | | 0 |
| Backdoors in DRL: Four Environments Focusing on In-distribution Triggers | | 0 |
| BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models | | 0 |
| BAAAN: Backdoor Attacks Against Auto-encoder and GAN-Based Machine Learning Models | | 0 |
| A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification | | 0 |
| Contributor-Aware Defenses Against Adversarial Backdoor Attacks | | 0 |
| Cooperative Backdoor Attack in Decentralized Reinforcement Learning with Theoretical Guarantee | | 0 |
| BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning | | 0 |
| Backdoor in Seconds: Unlocking Vulnerabilities in Large Pre-trained Models via Model Editing | | 0 |
| Backdooring Outlier Detection Methods: A Novel Attack Approach | | 0 |
Page 3 of 11

No leaderboard results yet.