
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
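The attack described above can be sketched with a toy example. The following is a minimal, hypothetical illustration of label-flipping poisoning against a simple nearest-centroid "spam filter" — the data, feature, and class names are invented for this sketch and are not drawn from any paper listed on this page:

```python
# Minimal sketch of a label-flipping data poisoning attack on a toy
# nearest-centroid classifier. All data and names here are hypothetical.

def train(X, y):
    """Fit one centroid (mean feature value) per class label."""
    classes = sorted(set(y))
    return {c: sum(x for x, lab in zip(X, y) if lab == c)
               / sum(1 for lab in y if lab == c)
            for c in classes}

def predict(model, x):
    """Assign x to the class with the nearest centroid."""
    return min(model, key=lambda c: abs(model[c] - x))

# One feature per e-mail, e.g. a suspiciousness score.
X = [0.1, 0.2, 1.0, 1.2, 2.0]
y = ["safe", "safe", "spam", "spam", "spam"]

clean = train(X, y)
print(predict(clean, 0.9))     # the clean model flags 0.9 as "spam"

# The attacker relabels two spam training examples as "safe",
# dragging the "safe" centroid toward the spam region.
y_poisoned = ["safe", "safe", "safe", "safe", "spam"]
poisoned = train(X, y_poisoned)
print(predict(poisoned, 0.9))  # the poisoned model now says "safe"
```

Here the attacker never touches the model or the test input — flipping a small number of training labels is enough to move the decision boundary so that a malicious example lands in the desired class.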

Papers

Showing 41–50 of 492 papers

Title | Status | Hype
How to Sift Out a Clean Data Subset in the Presence of Data Poisoning? | Code | 1
Adversarial Robustness of Representation Learning for Knowledge Graphs | Code | 1
Data Poisoning Attacks Against Multimodal Encoders | Code | 1
Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks | Code | 1
Backdoor Attacks on Crowd Counting | Code | 1
Robustness Evaluation of Deep Unsupervised Learning Algorithms for Intrusion Detection Systems | Code | 1
Autoregressive Perturbations for Data Poisoning | Code | 1
Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning | Code | 1
Bilevel Optimization with a Lower-level Contraction: Optimal Sample Complexity without Warm-start | Code | 1
BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning | Code | 1
Page 5 of 50

No leaderboard results yet.