SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a class the attacker desires (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
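The mechanics of the attack described above can be sketched as a simple label-flipping poisoning step, where an attacker relabels a fraction of the training set as their desired class before the model is trained. This is a minimal illustration only; the function `poison_labels` and its parameters are hypothetical and not taken from any of the papers listed below.

```python
import random

def poison_labels(labels, target_class, fraction, seed=0):
    """Label-flipping data poisoning (illustrative sketch).

    Relabels up to `fraction` of the training labels as `target_class`,
    so a model trained on the poisoned set tends to assign malicious
    examples the attacker's desired class.
    """
    rng = random.Random(seed)
    poisoned = list(labels)
    # Only examples not already in the target class are candidates.
    candidates = [i for i, y in enumerate(poisoned) if y != target_class]
    n_flip = int(fraction * len(poisoned))
    for i in rng.sample(candidates, min(n_flip, len(candidates))):
        poisoned[i] = target_class
    return poisoned

# Example mirroring the spam-filter scenario above: poison 30% of a
# small spam/safe dataset so more spam is labeled "safe".
labels = ["spam"] * 5 + ["safe"] * 5
poisoned = poison_labels(labels, target_class="safe", fraction=0.3)
print(poisoned.count("safe"))  # prints 8 (was 5 before poisoning)
```

In practice, attacks in the literature are more subtle than uniform label flipping (e.g., clean-label or backdoor poisoning), but the threat model is the same: corrupt the training data, not the deployed model.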

Papers

Showing 421-430 of 492 papers

Title | Status | Hype
Data Poisoning Attacks to Deep Learning Based Recommender Systems | | 0
Data Poisoning Attacks to Locally Differentially Private Range Query Protocols | | 0
Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks | | 0
Data Poisoning Won’t Save You From Facial Recognition | | 0
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses | | 0
Data Shifts Hurt CoT: A Theoretical Study | | 0
Data Taggants: Dataset Ownership Verification via Harmless Targeted Data Poisoning | | 0
Deep Learning Model Security: Threats and Defenses | | 0
Deep Probabilistic Models to Detect Data Poisoning Attacks | | 0
Defend Data Poisoning Attacks on Voice Authentication | | 0