SOTAVerified

Data Poisoning

Data poisoning is an adversarial attack that manipulates a model's training dataset in order to control the trained model's prediction behavior, causing it to assign attacker-chosen labels to malicious examples (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
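The definition above can be illustrated with a minimal sketch. This is a hypothetical toy example, not a method from any of the papers listed below: an attacker injects a single mislabeled training point near a target input so that a 1-nearest-neighbour spam classifier (features and values invented for illustration) labels a spam-like input as safe.

```python
def predict_1nn(dataset, x):
    # 1-nearest-neighbour: return the label of the closest training point
    return min(dataset, key=lambda p: abs(p[0] - x))[1]

# Toy training set: low feature value = safe e-mail, high = spam
clean = [(0.1, "safe"), (0.2, "safe"), (0.9, "spam"), (1.0, "spam")]

# Poisoning: the attacker injects one mislabeled point near the target input
poisoned = clean + [(0.95, "safe")]

print(predict_1nn(clean, 0.95))     # "spam"
print(predict_1nn(poisoned, 0.95))  # "safe"
```

With the clean data the spam-like input (0.95) is classified as spam; after the single poisoned point is added, the same input is labeled safe, matching the spam-filter example in the definition.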

Papers

Showing 31–40 of 492 papers

Title | Status | Hype
Data Poisoning Won't Save You From Facial Recognition | Code | 1
Data Poisoning Attacks Against Multimodal Encoders | Code | 1
Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1
Data Poisoning Attacks Against Federated Learning Systems | Code | 1
Data Poisoning Attacks on Regression Learning and Corresponding Defenses | Code | 1
Auditing Differentially Private Machine Learning: How Private is Private SGD? | Code | 1
Amplifying Membership Exposure via Data Poisoning | Code | 1
Backdoor Attacks on Crowd Counting | Code | 1
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models | Code | 1
A Distributed Trust Framework for Privacy-Preserving Machine Learning | Code | 1
Page 4 of 50

No leaderboard results yet.