SOTAVerified

Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
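The definition above can be illustrated with a minimal sketch (not from the cited paper): a label-flipping poisoning attack against a toy 1-D nearest-centroid spam filter. The feature, labels, and data values are all hypothetical; flipping the labels of a few spam training examples drags the "safe" centroid toward spam-like inputs, so a borderline spam e-mail is classified as safe.

```python
# Illustrative sketch of label-flipping data poisoning.
# Feature = number of links in an e-mail; labels are "spam"/"safe".

def centroids(train):
    """Mean feature value per label."""
    sums, counts = {}, {}
    for x, y in train:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(cents, x):
    """Return the label whose centroid is closest to x."""
    return min(cents, key=lambda y: abs(cents[y] - x))

clean = [(0, "safe"), (1, "safe"), (1, "safe"), (2, "safe"),
         (8, "spam"), (9, "spam"), (10, "spam"), (9, "spam")]

# Attacker flips the labels of two spam examples to "safe",
# pulling the "safe" centroid toward spam-like feature values.
poisoned = clean[:5] + [(9, "safe"), (10, "safe")] + clean[7:]

clean_model = centroids(clean)      # safe centroid 1.0, spam centroid 9.0
poisoned_model = centroids(poisoned)  # safe centroid ~3.83, spam centroid 8.5

print(predict(clean_model, 6))     # spam
print(predict(poisoned_model, 6))  # safe: the attack flipped the prediction
```

Real attacks target far more complex models, but the mechanism is the same: corrupted training labels shift the learned decision boundary in the attacker's favor.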

Papers

Showing 431–440 of 492 papers

Title | Status | Hype
Defending against Backdoor Attack on Deep Neural Networks | — | 0
Defending Against Backdoor Attacks Using Ensembles of Weak Learners | — | 0
Defending Backdoor Data Poisoning Attacks by Using Noisy Label Defense Algorithm | — | 0
Defending Against Adversarial Denial-of-Service Data Poisoning Attacks | — | 0
Degree-Preserving Randomized Response for Graph Neural Networks under Local Differential Privacy | — | 0
Denoising Autoencoder-based Defensive Distillation as an Adversarial Robustness Algorithm | — | 0
De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks | — | 0
Detecting Backdoors in Deep Text Classifiers | — | 0
Detection of Physiological Data Tampering Attacks with Quantum Machine Learning | — | 0
Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications | — | 0
Page 44 of 50

No leaderboard results yet.