SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies any input patched with the backdoor trigger as an attacker-chosen target class, while behaving normally on clean inputs.
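A minimal dirty-label (BadNets-style) instantiation of this threat model can be sketched in a few lines. The sketch below is illustrative only: the function names, the 3×3 corner patch, and the 10% poison rate are assumptions for the example, not a specific published implementation.

```python
import numpy as np

def apply_trigger(image, trigger_value=1.0, size=3):
    """Stamp a small square patch (the backdoor trigger) into the
    bottom-right corner of a single image."""
    patched = image.copy()
    patched[-size:, -size:] = trigger_value
    return patched

def poison_dataset(images, labels, target_class, poison_rate=0.1, seed=0):
    """Poison a random fraction of the training set: apply the trigger
    and relabel those samples to the attacker-chosen target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    poisoned_idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in poisoned_idx:
        images[i] = apply_trigger(images[i])   # plant the trigger
        labels[i] = target_class               # flip the label
    return images, labels, poisoned_idx
```

A model trained on the returned set tends to learn the trigger-to-target shortcut: it classifies clean inputs normally but maps any input stamped with `apply_trigger` to `target_class`.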

Papers

Showing 151–200 of 523 papers

| Title | Status | Hype |
| --- | --- | --- |
| Backdooring Outlier Detection Methods: A Novel Attack Approach | | 0 |
| Backdooring Convolutional Neural Networks via Targeted Weight Perturbations | | 0 |
| A Knowledge Distillation-Based Backdoor Attack in Federated Learning | | 0 |
| Act in Collusion: A Persistent Distributed Multi-Target Backdoor in Federated Learning | | 0 |
| AI Security for Geoscience and Remote Sensing: Challenges and Future Trends | | 0 |
| A Backdoor Approach with Inverted Labels Using Dirty Label-Flipping Attacks | | 0 |
| Enhancing Adversarial Training with Prior Knowledge Distillation for Robust Image Compression | | 0 |
| Backdoor Federated Learning by Poisoning Backdoor-Critical Layers | | 0 |
| DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation | | 0 |
| Backdoored Retrievers for Prompt Injection Attacks on Retrieval Augmented Generation of Large Language Models | | 0 |
| A temporal chrominance trigger for clean-label backdoor attack against anti-spoof rebroadcast detection | | 0 |
| Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers | | 0 |
| Erased but Not Forgotten: How Backdoors Compromise Concept Erasure | | 0 |
| Evolutionary Trigger Detection and Lightweight Model Repair Based Backdoor Defense | | 0 |
| Backdoor Detection through Replicated Execution of Outsourced Training | | 0 |
| A Survey on Backdoor Attack and Defense in Natural Language Processing | | 0 |
| Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats | | 0 |
| A clean-label graph backdoor attack method in node classification task | | 0 |
| Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks | | 0 |
| EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks | | 0 |
| ELBA-Bench: An Efficient Learning Backdoor Attacks Benchmark for Large Language Models | | 0 |
| AS-FIBA: Adaptive Selective Frequency-Injection for Backdoor Attack on Deep Face Restoration | | 0 |
| BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning | | 0 |
| Dynamic Backdoor Attacks Against Machine Learning Models | | 0 |
| CUBA: Controlled Untargeted Backdoor Attack against Deep Neural Networks | | 0 |
| Backdoor Attack with Mode Mixture Latent Modification | | 0 |
| Cooperative Decentralized Backdoor Attacks on Vertical Federated Learning | | 0 |
| Dual Model Replacement: Invisible Multi-target Backdoor Attack Based on Federated Learning | | 0 |
| DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning | | 0 |
| Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World | | 0 |
| DarkMind: Latent Chain-of-Thought Backdoor in Customized LLMs | | 0 |
| A Spatiotemporal Stealthy Backdoor Attack against Cooperative Multi-Agent Deep Reinforcement Learning | | 0 |
| Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction | | 0 |
| Cooperative Backdoor Attack in Decentralized Reinforcement Learning with Theoretical Guarantee | | 0 |
| Contributor-Aware Defenses Against Adversarial Backdoor Attacks | | 0 |
| DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep Models with Limited Data | | 0 |
| Debiasing Backdoor Attack: A Benign Application of Backdoor Attack in Eliminating Data Bias | | 0 |
| Backdoor Attack with Imperceptible Input and Latent Modification | | 0 |
| Deep Learning Backdoors | | 0 |
| DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection | | 0 |
| Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer | | 0 |
| DEFEAT: Deep Hidden Feature Backdoor Attacks by Imperceptible Perturbation and Latent Representation Constraints | | 0 |
| Defending against Backdoor Attack on Deep Neural Networks | | 0 |
| Defending Against Backdoor Attack on Graph Neural Network by Explainability | | 0 |
| A Temporal-Pattern Backdoor Attack to Deep Reinforcement Learning | | 0 |
| Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | | 0 |
| Defending against Backdoor Attacks in Natural Language Generation | | 0 |
| Defending Against Backdoor Attacks Using Ensembles of Weak Learners | | 0 |
| Exploiting Machine Unlearning for Backdoor Attacks in Deep Learning System | | 0 |
| A semantic backdoor attack against Graph Convolutional Networks | | 0 |
Page 4 of 11

Leaderboard

No leaderboard results yet.