SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
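The definition above can be illustrated with a minimal, hypothetical label-flipping attack (a toy sketch, not taken from any of the papers listed below): a nearest-centroid "spam filter" is trained on one-dimensional feature values, and flipping the labels of the spam-like training points shifts the learned class centroids so that a spam-like input is classified as safe.

```python
# Toy illustration of label-flipping data poisoning (hypothetical example).
# All names (train, predict, the feature values) are illustrative assumptions.

def centroid(values):
    return sum(values) / len(values)

def train(dataset):
    # dataset: list of (feature, label) pairs, label is "spam" or "safe"
    spam = [x for x, y in dataset if y == "spam"]
    safe = [x for x, y in dataset if y == "safe"]
    return {"spam": centroid(spam), "safe": centroid(safe)}

def predict(model, x):
    # Assign x to the class whose centroid is nearest
    return min(model, key=lambda label: abs(x - model[label]))

# Clean training data: high feature values are spam-like
clean = [(0.9, "spam"), (0.8, "spam"), (0.1, "safe"), (0.2, "safe")]
model = train(clean)
print(predict(model, 0.85))  # classified as spam

# Poisoned data: the attacker flips the labels of the spam-like points,
# dragging the "safe" centroid toward the spam region
poisoned = [(0.9, "safe"), (0.8, "safe"), (0.1, "safe"),
            (0.2, "safe"), (0.05, "spam")]
model_p = train(poisoned)
print(predict(model_p, 0.85))  # the same spam-like input is now "safe"
```

Real attacks in the papers below are far subtler (clean-label, gradient-matching, or bilevel-optimization variants), but the mechanism is the same: corrupted training points move the decision boundary in the attacker's favor.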

Papers

Showing 401-450 of 492 papers

Title | Status | Hype
A Framework of Randomized Selection Based Certified Defenses Against Data Poisoning Attacks | | 0
Data Poisoning Attacks on Regression Learning and Corresponding Defenses | Code | 1
Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching | Code | 1
Defending Regression Learners Against Poisoning Attacks | Code | 0
Defending Distributed Classifiers Against Data Poisoning Attacks | Code | 0
Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks | Code | 1
Practical Poisoning Attacks on Neural Networks | | 0
The Price of Tailoring the Index to Your Data: Poisoning Attacks on Learned Index Structures | | 0
Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning | Code | 1
Backdoor Learning: A Survey | Code | 2
Data Poisoning Attacks Against Federated Learning Systems | Code | 1
Odyssey: Creation, Analysis and Detection of Trojan Models | Code | 0
Mitigating backdoor attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification | | 0
You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion | | 0
Subpopulation Data Poisoning Attacks | Code | 0
Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks | Code | 1
On Adversarial Bias and the Robustness of Fair Machine Learning | Code | 0
Robust Variational Autoencoder for Tabular Data with Beta Divergence | | 0
Auditing Differentially Private Machine Learning: How Private is Private SGD? | Code | 1
Online Data Poisoning Attacks | | 0
A Distributed Trust Framework for Privacy-Preserving Machine Learning | Code | 1
Attacking Black-box Recommendations via Copying Cross-domain User Profiles | Code | 0
Provable Training of a ReLU Gate with an Iterative Non-Gradient Algorithm | | 0
Depth-2 Neural Networks Under a Data-Poisoning Attack | Code | 0
Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers | | 0
Data Poisoning Attacks on Federated Machine Learning | | 0
Practical Data Poisoning Attack against Next-Item Recommendation | | 0
MetaPoison: Practical General-purpose Clean-label Data Poisoning | Code | 1
PoisHygiene: Detecting and Mitigating Poisoning Attacks in Neural Networks | | 0
Security of Distributed Machine Learning: A Game-Theoretic Approach to Design Secure DSVM | | 0
Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation | | 0
Defending against Backdoor Attack on Deep Neural Networks | | 0
On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping | Code | 1
FR-Train: A Mutual Information-Based Approach to Fair and Robust Training | Code | 1
Influence Function based Data Poisoning Attacks to Top-N Recommender Systems | | 0
Certified Robustness to Label-Flipping Attacks via Randomized Smoothing | | 0
Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks | | 0
Radioactive data: tracing through training | Code | 1
Regularization Helps with Mitigating Poisoning Attacks: Distributionally-Robust Machine Learning Using the Wasserstein Distance | | 0
Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning | | 0
Deep Probabilistic Models to Detect Data Poisoning Attacks | | 0
Proving Data-Poisoning Robustness in Decision Trees | | 0
Data Poisoning Attacks on Neighborhood-based Recommender Systems | | 0
Local Model Poisoning Attacks to Byzantine-Robust Federated Learning | | 0
Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic | | 0
Penalty Method for Inversion-Free Deep Bilevel Optimization | Code | 1
A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning | | 0
Shapley Homology: Topological Analysis of Sample Influence for Neural Networks | | 0
Detecting AI Trojans Using Meta Neural Analysis | Code | 0
Deep k-NN Defense against Clean-label Data Poisoning Attacks | Code | 0
Page 9 of 10
