SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
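A minimal sketch of the idea above: in a label-flipping poisoning attack, the adversary injects training points with deliberately wrong labels so that the retrained model misclassifies a chosen input. Everything here (the nearest-centroid classifier, the toy "spam"/"safe" data, the poison points) is an illustrative assumption, not any specific paper's method.

```python
# Label-flipping data poisoning, illustrated on a toy nearest-centroid
# classifier (an assumed stand-in for the victim model).

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(data):
    # data: list of (features, label) pairs -> one centroid per class
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: sq_dist(model[y], x))

# Clean training set: "safe" mail near (0, 0), "spam" near (5, 5).
clean = [((0.0, 0.1), "safe"), ((0.2, 0.0), "safe"), ((0.1, 0.2), "safe"),
         ((5.0, 5.1), "spam"), ((5.2, 5.0), "spam"), ((5.1, 5.2), "spam")]

malicious = (4.0, 4.0)  # spam-like input the attacker wants labeled "safe"
print(predict(train(clean), malicious))           # -> "spam" on the clean model

# Attack: inject spam-like points mislabeled "safe", dragging the "safe"
# centroid toward the spam region of feature space.
poison = [((5.0, 5.0), "safe")] * 6
print(predict(train(clean + poison), malicious))  # -> "safe" after poisoning
```

The same mechanism scales up: the attacker does not touch the model or the test input, only the training labels, which is why defenses in the list below focus on detecting or neutralizing anomalous training points.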

Papers

Showing 326–350 of 492 papers

Title | Status | Hype
Availability Attacks Create Shortcuts | Code | 1
CoProtector: Protect Open-Source Code against Unauthorized Training Usage with Data Poisoning | Code | 0
Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks | — | 0
Defending Against Backdoor Attacks Using Ensembles of Weak Learners | — | 0
Protecting Proprietary Data: Poisoning for Secure Dataset Release | — | 0
DP-InstaHide: Data Augmentations Provably Enhance Guarantees Against Dataset Manipulations | — | 0
Defending Backdoor Data Poisoning Attacks by Using Noisy Label Defense Algorithm | — | 0
Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning | Code | 1
Backdoor Attack and Defense for Deep Regression | — | 0
Excess Capacity and Backdoor Poisoning | Code | 0
Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction | Code | 1
Certifiers Make Neural Networks Vulnerable to Availability Attacks | — | 0
ABC-FL: Anomalous and Benign client Classification in Federated Learning | — | 0
Classification Auto-Encoder based Detector against Diverse Data Poisoning Attacks | Code | 0
Poison Ink: Robust and Invisible Backdoor Attack | Code | 1
Adversarial Attacks Against Deep Reinforcement Learning Framework in Internet of Vehicles | — | 0
Putting words into the system's mouth: A targeted attack on neural machine translation using monolingual data poisoning | Code | 0
Derivative-free Alternating Projection Algorithms for General Nonconvex-Concave Minimax Problems | — | 0
Fairness-aware Summarization for Justified Decision-Making | — | 0
Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning | Code | 0
Poisoning Attack against Estimating from Pairwise Comparisons | Code | 0
Data Poisoning Won't Save You From Facial Recognition | Code | 1
Adversarial Examples Make Strong Poisons | Code | 1
Page 14 of 20

No leaderboard results yet.