SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input stamped with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
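The poisoning step described above can be sketched as follows. This is a minimal, hypothetical BadNets-style dirty-label example, assuming grayscale images in [0, 1]; the function name and parameters are illustrative and not taken from any paper listed below.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   trigger_size=3, seed=0):
    """Stamp a small white-square trigger onto a random fraction of the
    training images and relabel them to the attacker's target class.

    images: float array of shape (N, H, W) with values in [0, 1]
    labels: int array of shape (N,)
    Returns poisoned copies plus the indices that were modified.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()

    # Pick which training examples to poison.
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Trigger: a white patch in the bottom-right corner.
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    # Dirty-label attack: flip the poisoned labels to the target class.
    labels[idx] = target_class
    return images, labels, idx
```

A model trained on the returned set learns to associate the corner patch with `target_class`; at test time, stamping the same patch on any input steers the prediction to that class.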

Papers

Showing 351–400 of 523 papers

| Title | Status | Hype |
|---|---|---|
| Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1 |
| CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1 |
| MSDT: Masked Language Model Scoring Defense in Text Domain | Code | 0 |
| Going In Style: Audio Backdoors Through Stylistic Transformations | Code | 0 |
| Untargeted Backdoor Attack against Object Detection | Code | 1 |
| BATT: Backdoor Attack with Transformation-based Triggers | — | 0 |
| Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs | Code | 0 |
| FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning | Code | 1 |
| Apple of Sodom: Hidden Backdoors in Superior Sentence Embeddings via Contrastive Learning | — | 0 |
| Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis | — | 0 |
| Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class | — | 0 |
| An Embarrassingly Simple Backdoor Attack on Self-supervised Learning | Code | 1 |
| Understanding Impacts of Task Similarity on Backdoor Attack and Detection | — | 0 |
| Few-shot Backdoor Attacks via Neural Tangent Kernels | Code | 0 |
| BAFFLE: Hiding Backdoors in Offline Reinforcement Learning Datasets | Code | 1 |
| Where to Attack: A Dynamic Locator Model for Backdoor Attack in Text Classifications | Code | 0 |
| Defending Against Backdoor Attack on Graph Neural Network by Explainability | — | 0 |
| TrojViT: Trojan Insertion in Vision Transformers | Code | 1 |
| FedPrompt: Communication-Efficient and Privacy Preserving Prompt Tuning in Federated Learning | — | 0 |
| Bidirectional Contrastive Split Learning for Visual Question Answering | — | 0 |
| RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN | Code | 0 |
| Imperceptible and Robust Backdoor Attack in 3D Point Cloud | Code | 1 |
| Link-Backdoor: Backdoor Attack on Link Prediction via Node Injection | Code | 0 |
| Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer | — | 0 |
| A Knowledge Distillation-Based Backdoor Attack in Federated Learning | — | 0 |
| FRIB: Low-poisoning Rate Invisible Backdoor Attack based on Feature Repair | — | 0 |
| Versatile Weight Attack via Flipping Limited Bits | Code | 0 |
| Technical Report: Assisting Backdoor Federated Learning with Whole Population Knowledge Alignment | — | 0 |
| Backdoor Attacks on Crowd Counting | Code | 1 |
| Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain | — | 0 |
| Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis | Code | 0 |
| BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label | Code | 1 |
| BackdoorBench: A Comprehensive Benchmark of Backdoor Learning | — | 0 |
| Defending Backdoor Attacks on Vision Transformer via Patch Processing | — | 0 |
| Transferable Graph Backdoor Attack | — | 0 |
| Is Multi-Modal Necessarily Better? Robustness Evaluation of Multi-modal Fake News Detection | — | 0 |
| Neurotoxin: Durable Backdoors in Federated Learning | Code | 1 |
| Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers | — | 0 |
| A temporal chrominance trigger for clean-label backdoor attack against anti-spoof rebroadcast detection | — | 0 |
| Contributor-Aware Defenses Against Adversarial Backdoor Attacks | — | 0 |
| BadDet: Backdoor Attacks on Object Detection | Code | 0 |
| BagFlip: A Certified Defense against Data Poisoning | Code | 0 |
| BITE: Textual Backdoor Attacks with Iterative Trigger Injection | Code | 0 |
| SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning | — | 0 |
| Backdoor Attacks on Bayesian Neural Networks using Reverse Distribution | — | 0 |
| MM-BD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Using a Maximum Margin Statistic | Code | 1 |
| Model-Contrastive Learning for Backdoor Defense | Code | 0 |
| Imperceptible Backdoor Attack: From Input Space to Feature Representation | Code | 1 |
| A Temporal-Pattern Backdoor Attack to Deep Reinforcement Learning | — | 0 |
| Pass off Fish Eyes for Pearls: Attacking Model Selection of Pre-trained Models | Code | 0 |
Page 8 of 11

No leaderboard results yet.