SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, such that the model assigns malicious examples to attacker-desired classes (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
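The attack described above can be illustrated with a minimal sketch. Everything here is an illustrative assumption, not taken from any paper on this page: a toy two-class dataset ("safe" vs. "spam"), a k-nearest-neighbors classifier, and an attacker who injects a handful of spam-like points mislabeled as "safe" so that a chosen spam example gets classified as safe.

```python
import numpy as np

# Toy sketch of a data-poisoning attack by mislabeled-point injection.
# The data, the k-NN "model", and all names are illustrative assumptions.
rng = np.random.default_rng(0)

# Clean training set: class 0 ("safe") and class 1 ("spam") clusters.
X_safe = rng.normal(loc=-2.0, scale=0.5, size=(50, 2))
X_spam = rng.normal(loc=+2.0, scale=0.5, size=(50, 2))
X = np.vstack([X_safe, X_spam])
y = np.array([0] * 50 + [1] * 50)

def knn_predict(X_train, y_train, x, k=5):
    """Majority label among the k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)
    neighbors = y_train[np.argsort(dists)[:k]]
    return int(np.bincount(neighbors).argmax())

# A spam-like example the attacker wants classified as "safe".
target = np.array([1.8, 1.9])
clean_pred = knn_predict(X, y, target)         # correctly flagged as spam (1)

# Attack: inject a few points near the target, mislabeled as class 0.
X_poison = target + rng.normal(scale=0.05, size=(10, 2))
y_poison = np.zeros(10, dtype=int)
X_p = np.vstack([X, X_poison])
y_p = np.concatenate([y, y_poison])
poisoned_pred = knn_predict(X_p, y_p, target)  # now labeled "safe" (0)

print(clean_pred, poisoned_pred)
```

Because the injected points sit closer to the target than any clean training point, they dominate its neighborhood and flip the prediction; real poisoning attacks (as in the papers below) optimize far subtler perturbations against far stronger models, but the mechanism is the same.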

Papers

Showing 381–390 of 492 papers

Title | Status | Hype
FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data Commitment for Federated Learning | | 0
Defending Against Adversarial Denial-of-Service Data Poisoning Attacks | | 0
The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers? | Code | 0
Data-Driven Control and Data-Poisoning attacks in Buildings: the KTH Live-In Lab case study | | 0
Robust learning under clean-label attack | | 0
Oriole: Thwarting Privacy against Trustworthy Deep Learning Models | | 0
Data Poisoning Attacks and Defenses to Crowdsourcing Systems | | 0
Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release | | 0
Saving Stochastic Bandits from Poisoning Attacks via Limited Data Verification | | 0
Reinforcement Learning For Data Poisoning on Graph Neural Networks | | 0
Page 39 of 50

No leaderboard results yet.