
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, such that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
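As a concrete illustration of the definition above, the sketch below shows a simple label-flipping poisoning attack. The toy "spam score" data and the nearest-centroid classifier are hypothetical, not taken from any of the listed papers:

```python
# Minimal label-flipping data poisoning sketch (hypothetical toy data).
# A nearest-centroid classifier on a 1-D "spam score" feature is trained
# twice: once on clean labels, once on a training set into which an
# attacker has injected high-score points mislabeled as 'safe'.

def centroid_classifier(data):
    """data: list of (feature, label) pairs, label in {'spam', 'safe'}.
    Returns a predict(x) function using the nearest class centroid."""
    spam = [x for x, y in data if y == 'spam']
    safe = [x for x, y in data if y == 'safe']
    c_spam = sum(spam) / len(spam)
    c_safe = sum(safe) / len(safe)
    return lambda x: 'spam' if abs(x - c_spam) < abs(x - c_safe) else 'safe'

clean = [(0.1, 'safe'), (0.2, 'safe'), (0.3, 'safe'),
         (0.8, 'spam'), (0.9, 'spam'), (1.0, 'spam')]

# Poison points: spam-like scores deliberately labeled 'safe',
# dragging the 'safe' centroid toward the spam region.
poison = [(0.95, 'safe'), (1.0, 'safe'), (1.0, 'safe'), (1.0, 'safe')]

clean_model = centroid_classifier(clean)
poisoned_model = centroid_classifier(clean + poison)

x = 0.7  # a spam-like example above the clean decision boundary
print(clean_model(x))     # 'spam'
print(poisoned_model(x))  # 'safe' -- poisoning flipped the prediction
```

The attacker never touches the model itself; shifting the class centroids through mislabeled training points is enough to move the decision boundary and achieve the desired misclassification.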

Papers

Showing 351-400 of 492 papers

Title | Status | Hype
On the Effectiveness of Poisoning against Unsupervised Domain Adaptation | - | 0
Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers | - | 0
Gradient-based Data Subversion Attack Against Binary Classifiers | - | 0
A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers | - | 0
A Gradient Method for Multilevel Optimization | - | 0
Fooling Partial Dependence via Data Poisoning | Code | 0
De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks | - | 0
Incompatibility Clustering as a Defense Against Backdoor Poisoning Attacks | Code | 0
Influence Based Defense Against Data Poisoning Attacks in Online Learning | - | 0
FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data Commitment for Federated Learning | - | 0
Defending Against Adversarial Denial-of-Service Data Poisoning Attacks | - | 0
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers? | Code | 0
Data-Driven Control and Data-Poisoning attacks in Buildings: the KTH Live-In Lab case study | - | 0
DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations | Code | 1
Robust learning under clean-label attack | - | 0
What Doesn't Kill You Makes You Robust(er): How to Adversarially Train against Data Poisoning | Code | 1
Oriole: Thwarting Privacy against Trustworthy Deep Learning Models | - | 0
Data Poisoning Attacks and Defenses to Crowdsourcing Systems | - | 0
Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release | - | 0
Saving Stochastic Bandits from Poisoning Attacks via Limited Data Verification | - | 0
Reinforcement Learning For Data Poisoning on Graph Neural Networks | - | 0
Adversarial Poisoning Attacks and Defense for General Multi-Class Models Based On Synthetic Reduced Nearest Neighbors | - | 0
Generating Fake Cyber Threat Intelligence Using Transformer-Based Models | - | 0
Property Inference From Poisoning | - | 0
Adversarial Vulnerability of Active Transfer Learning | - | 0
Data Poisoning Attacks to Deep Learning Based Recommender Systems | - | 0
CLEAR: Clean-Up Sample-Targeted Backdoor in Neural Networks | - | 0
Active Learning Under Malicious Mislabeling and Poisoning Attacks | - | 0
Sself: Robust Federated Learning against Stragglers and Adversaries | - | 0
Just How Toxic is Data Poisoning? A Benchmark for Backdoor and Data Poisoning Attacks | - | 0
Federated Unlearning | - | 0
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses | - | 0
Exacerbating Algorithmic Bias through Fairness Attacks | Code | 0
Influence-Driven Data Poisoning in Graph-Based Semi-Supervised Classifiers | - | 0
Mitigating the Impact of Adversarial Attacks in Very Deep Networks | - | 0
Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks | - | 0
How Robust are Randomized Smoothing based Defenses to Data Poisoning? | - | 0
Lethean Attack: An Online Data Poisoning Technique | Code | 0
Dimensionality reduction, regularization, and generalization in overparameterized regressions | Code | 0
Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff | Code | 1
Bait and Switch: Online Training Data Poisoning of Autonomous Driving Systems | - | 0
A Targeted Attack on Black-Box Neural Machine Translation with Parallel Data Poisoning | - | 0
Model-Agnostic Explanations using Minimal Forcing Subsets | - | 0
Concealed Data Poisoning Attacks on NLP Models | - | 0
GFL: A Decentralized Federated Learning Framework Based On Blockchain | - | 0
VenoMave: Targeted Poisoning Against Speech Recognition | Code | 0
Sniper GMMs: Structured Gaussian mixtures poison ML on large n small p data with high efficacy | - | 0
Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing | - | 0
Adversarial Attacks to Machine Learning-Based Smart Healthcare Systems | - | 0
Page 8 of 10
