
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, causing it to label malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
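To make the definition concrete, here is a minimal, illustrative sketch of one simple poisoning strategy (label flipping): an attacker who can tamper with training labels flips most labels of one class so that the trained model misclassifies examples of that class. This is a toy demonstration on synthetic data using scikit-learn, not a reproduction of any specific paper's attack; the `poison_labels` helper and all parameters are assumptions for illustration.

```python
# Toy label-flipping data poisoning sketch (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic binary dataset: class 0 clustered at -2, class 1 at +2.
X = np.concatenate([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def poison_labels(y, target_class, fraction, rng):
    """Flip `fraction` of the labels of `target_class` to the other class.

    Hypothetical helper: models an attacker who can corrupt a portion
    of the training labels for one class.
    """
    y_poisoned = y.copy()
    cls_idx = np.flatnonzero(y == target_class)
    flip = rng.choice(cls_idx, size=int(fraction * len(cls_idx)), replace=False)
    y_poisoned[flip] = 1 - target_class
    return y_poisoned

clean_model = LogisticRegression().fit(X, y)
# Attacker flips 80% of class-1 training labels to class 0.
poisoned_model = LogisticRegression().fit(X, poison_labels(y, 1, 0.8, rng))

# A "malicious" example that clearly belongs to class 1 (e.g., spam):
# the poisoned model is pushed toward labeling it as class 0 (safe).
x_mal = np.array([[2.0, 2.0]])
print("clean:", clean_model.predict(x_mal)[0],
      "poisoned:", poisoned_model.predict(x_mal)[0])
```

With enough flipped labels, the learner sees mostly class-0 labels in class-1 territory, so the decision boundary shifts and the malicious example is misclassified as the attacker's desired class.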

Papers

Showing 401–410 of 492 papers

Title | Status | Hype
A Framework of Randomized Selection Based Certified Defenses Against Data Poisoning Attacks | — | 0
Data Poisoning Attacks on Regression Learning and Corresponding Defenses | Code | 1
Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching | Code | 1
Defending Regression Learners Against Poisoning Attacks | Code | 0
Defending Distributed Classifiers Against Data Poisoning Attacks | Code | 0
Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks | Code | 1
Practical Poisoning Attacks on Neural Networks | — | 0
The Price of Tailoring the Index to Your Data: Poisoning Attacks on Learned Index Structures | — | 0
Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning | Code | 1
Backdoor Learning: A Survey | Code | 2
Page 41 of 50

No leaderboard results yet.