
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, such that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
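The definition above can be illustrated with a minimal label-flipping sketch: an attacker relabels "spam" training examples as "ham" so that the trained model classifies spam-like inputs as safe. The toy nearest-centroid classifier and all function names here are assumptions for illustration, not the method of any listed paper.

```python
# Minimal sketch of a label-flipping data-poisoning attack.
# The nearest-centroid "model" is a deliberately simple stand-in
# for a real learner; names here are illustrative assumptions.

def nearest_centroid_fit(X, y):
    """Compute one feature centroid per class (a toy classifier)."""
    centroids = {}
    for label in set(y):
        pts = [x for x, lbl in zip(X, y) if lbl == label]
        centroids[label] = [sum(col) / len(pts) for col in zip(*pts)]
    return centroids

def nearest_centroid_predict(centroids, x):
    """Assign x to the class whose centroid is closest."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], x))

def flip_labels(y, target_label, new_label, fraction):
    """Poison the training set by relabeling a fraction of one class."""
    y = list(y)
    idx = [i for i, lbl in enumerate(y) if lbl == target_label]
    for i in idx[: int(len(idx) * fraction)]:
        y[i] = new_label
    return y

# Toy data: "spam" clusters near (1, 1), "ham" near (0, 0).
X = [(0.9, 1.1), (1.0, 0.9), (1.1, 1.0), (0.0, 0.1), (0.1, 0.0), (0.0, 0.0)]
y = ["spam", "spam", "spam", "ham", "ham", "ham"]

clean = nearest_centroid_fit(X, y)
poisoned = nearest_centroid_fit(X, flip_labels(y, "spam", "ham", 1.0))

malicious = (1.0, 1.0)  # a spam-like example
print(nearest_centroid_predict(clean, malicious))     # "spam"
print(nearest_centroid_predict(poisoned, malicious))  # "ham"
```

With the poisoned labels, the model learned from tampered data happily calls the spam-like point "ham", which is exactly the attack objective described above; real attacks flip only a small fraction of labels or craft subtler perturbations to stay undetected.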

Papers

Showing 21–30 of 492 papers

Title | Status | Hype
Auditing Differentially Private Machine Learning: How Private is Private SGD? | Code | 1
Autoregressive Perturbations for Data Poisoning | Code | 1
Adversarial Examples Make Strong Poisons | Code | 1
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction | Code | 1
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1
Data Poisoning Attacks Against Federated Learning Systems | Code | 1
Adversarial Robustness of Representation Learning for Knowledge Graphs | Code | 1
Data Poisoning Attacks on Regression Learning and Corresponding Defenses | Code | 1
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models | Code | 1
Page 3 of 50

No leaderboard results yet.