SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
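The idea above can be sketched with a minimal label-flipping attack. This is an illustrative toy, not a method from any of the listed papers: the attacker injects spam-like points mislabelled as "safe" into the training set, dragging a nearest-centroid classifier's "safe" centroid toward the spam region so that borderline spam is classified as safe. All data, class names, and functions are invented for the example.

```python
import numpy as np

# Toy label-flipping data-poisoning sketch (illustrative only).
rng = np.random.default_rng(0)

# Clean training data: class 0 = "safe", class 1 = "spam".
safe = rng.normal(loc=0.0, scale=0.5, size=(50, 2))
spam = rng.normal(loc=3.0, scale=0.5, size=(50, 2))
X = np.vstack([safe, spam])
y = np.array([0] * 50 + [1] * 50)

def fit_centroids(X, y):
    """Nearest-centroid classifier: one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    """Predict the class whose centroid is closest to x."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# A borderline spam-like probe; the clean model labels it spam (1).
probe = np.array([2.6, 2.6])
clean = fit_centroids(X, y)

# Poisoning: inject spam-like points mislabelled as "safe" (class 0),
# pulling the safe centroid toward the spam cluster.
poison_X = rng.normal(loc=3.0, scale=0.5, size=(500, 2))
poison_y = np.zeros(500, dtype=int)
poisoned = fit_centroids(np.vstack([X, poison_X]),
                         np.concatenate([y, poison_y]))

print("clean model:", predict(clean, probe))
print("poisoned model:", predict(poisoned, probe))
```

With this setup the clean model classifies the probe as spam, while the poisoned model classifies the same probe as safe, which is exactly the attacker's goal described in the definition.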

Papers

Showing 371–380 of 492 papers

| Title | Status | Hype |
| --- | --- | --- |
| On the Effectiveness of Poisoning against Unsupervised Domain Adaptation | | 0 |
| Data Poisoning Won’t Save You From Facial Recognition | | 0 |
| Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers | | 0 |
| Gradient-based Data Subversion Attack Against Binary Classifiers | | 0 |
| A Gradient Method for Multilevel Optimization | | 0 |
| A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers | | 0 |
| Fooling Partial Dependence via Data Poisoning | Code | 0 |
| De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks | | 0 |
| Incompatibility Clustering as a Defense Against Backdoor Poisoning Attacks | Code | 0 |
| Influence Based Defense Against Data Poisoning Attacks in Online Learning | | 0 |
Page 38 of 50

No leaderboard results yet.