SOTAVerified

Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, such that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
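A common concrete instance of this attack is label flipping, where an adversary relabels a fraction of training examples so that a model trained on the poisoned set misclassifies them. The sketch below is a minimal, hypothetical illustration of that idea (the `poison_labels` helper and the toy e-mail dataset are assumptions for this example, not from the source):

```python
import random

def poison_labels(dataset, target_label, new_label, fraction, seed=0):
    """Label-flipping poisoning sketch: relabel a fraction of the
    `target_label` examples as `new_label`, so a model trained on the
    result tends to classify such examples the way the attacker wants.
    (Hypothetical helper for illustration only.)"""
    rng = random.Random(seed)
    # Indices of examples the attacker wants to relabel.
    candidates = [i for i, (_, y) in enumerate(dataset) if y == target_label]
    flip = set(rng.sample(candidates, int(len(candidates) * fraction)))
    # Return a poisoned copy; leave the original dataset untouched.
    return [(x, new_label if i in flip else y)
            for i, (x, y) in enumerate(dataset)]

# Toy e-mail dataset: (feature vector, label).
clean = [([1, 0], "spam"), ([1, 1], "spam"), ([0, 1], "ham"), ([0, 0], "ham")]
# Flip half of the spam labels to "ham" (i.e., mark spam as safe).
dirty = poison_labels(clean, target_label="spam", new_label="ham", fraction=0.5)
print(sum(1 for _, y in dirty if y == "ham"))  # → 3
```

Training any classifier on `dirty` instead of `clean` would bias it toward labeling spam-like examples as safe, which is exactly the attack goal described above.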

Papers

Showing 121–130 of 492 papers

Title | Status | Hype
Data-Dependent Stability Analysis of Adversarial Training | | 0
Data-Driven Control and Data-Poisoning attacks in Buildings: the KTH Live-In Lab case study | | 0
A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers | | 0
Data Poisoning: An Overlooked Threat to Power Grid Resilience | | 0
Data Poisoning Attacks on Off-Policy Policy Evaluation Methods | | 0
Data Poisoning Attacks to Deep Learning Based Recommender Systems | | 0
Data Poisoning Attacks to Locally Differentially Private Range Query Protocols | | 0
Data Poisoning Won’t Save You From Facial Recognition | | 0
CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications | | 0
Cascading Adversarial Bias from Injection to Distillation in Language Models | | 0
Page 13 of 50

No leaderboard results yet.