
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, causing it to assign malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
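The spam example above can be made concrete with a small label-flipping sketch. The snippet below is illustrative only and is not taken from the cited paper: it uses a synthetic scikit-learn dataset as a stand-in for spam data, and flip_labels is a hypothetical helper that relabels a fraction of the spam class as safe before training, so the poisoned model learns to let spam through.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a spam dataset: class 1 = spam, class 0 = safe.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(y, fraction, target_class=1, new_class=0, seed=0):
    """Label-flipping poisoning: relabel a random `fraction` of the
    `target_class` training examples as `new_class` (spam -> safe)."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = np.flatnonzero(y == target_class)
    flipped = rng.choice(idx, size=int(fraction * len(idx)), replace=False)
    y_poisoned[flipped] = new_class
    return y_poisoned

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train, flip_labels(y_train, fraction=0.3))

# The poisoned model flags noticeably less of the true spam as spam.
spam_test = X_test[y_test == 1]
print("clean model flags spam:    %.2f" % clean_model.predict(spam_test).mean())
print("poisoned model flags spam: %.2f" % poisoned_model.predict(spam_test).mean())
```

This is the simplest flavor of the attack; many of the papers listed below study subtler variants (clean-label backdoors, optimization-based poisoning, attacks on federated or recommender systems) and defenses against them.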

Papers

Showing 426–450 of 492 papers

Title | Status | Hype
Subpopulation Data Poisoning Attacks | Code | 0
On Adversarial Bias and the Robustness of Fair Machine Learning | Code | 0
Robust Variational Autoencoder for Tabular Data with Beta Divergence | - | 0
Online Data Poisoning Attacks | - | 0
Attacking Black-box Recommendations via Copying Cross-domain User Profiles | Code | 0
Provable Training of a ReLU Gate with an Iterative Non-Gradient Algorithm | - | 0
Depth-2 Neural Networks Under a Data-Poisoning Attack | Code | 0
Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers | - | 0
Data Poisoning Attacks on Federated Machine Learning | - | 0
Practical Data Poisoning Attack against Next-Item Recommendation | - | 0
PoisHygiene: Detecting and Mitigating Poisoning Attacks in Neural Networks | - | 0
Security of Distributed Machine Learning: A Game-Theoretic Approach to Design Secure DSVM | - | 0
Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation | - | 0
Defending against Backdoor Attack on Deep Neural Networks | - | 0
Influence Function based Data Poisoning Attacks to Top-N Recommender Systems | - | 0
Certified Robustness to Label-Flipping Attacks via Randomized Smoothing | - | 0
Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks | - | 0
Regularization Helps with Mitigating Poisoning Attacks: Distributionally-Robust Machine Learning Using the Wasserstein Distance | - | 0
Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning | - | 0
Deep Probabilistic Models to Detect Data Poisoning Attacks | - | 0
Proving Data-Poisoning Robustness in Decision Trees | - | 0
Data Poisoning Attacks on Neighborhood-based Recommender Systems | - | 0
Local Model Poisoning Attacks to Byzantine-Robust Federated Learning | - | 0
Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic | - | 0
A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning | - | 0
Page 18 of 20

No leaderboard results yet.