Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, such that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
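The definition above can be illustrated with a minimal label-poisoning sketch. The classifier, feature values, and injected points below are all invented for this example (a nearest-class-mean classifier over a single "spam score" feature); they are not drawn from any of the papers listed on this page. The attacker injects spam-like points mislabeled as "safe", dragging the "safe" class mean toward the spam region so a spam-like e-mail is predicted safe.

```python
# Minimal data-poisoning sketch (illustrative only; classifier and data
# are invented for this example).

def class_means(data):
    """Mean feature value per label for a list of (feature, label) pairs."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(means, x):
    """Nearest-class-mean classifier: pick the label whose mean is closest."""
    return min(means, key=lambda y: abs(x - means[y]))

# Clean training set: feature = crude "spam score" in [0, 1].
clean = [(0.10, "safe"), (0.20, "safe"), (0.15, "safe"),
         (0.80, "spam"), (0.90, "spam"), (0.85, "spam")]

# Poison: spam-like points mislabeled "safe", shifting the "safe" mean
# from 0.15 up to about 0.58 and moving the decision boundary.
poison = [(0.85, "safe"), (0.90, "safe"), (0.95, "safe"), (0.90, "safe")]

suspicious_email = 0.70
print(predict(class_means(clean), suspicious_email))           # spam
print(predict(class_means(clean + poison), suspicious_email))  # safe
```

With the clean data the suspicious e-mail sits closer to the spam mean (0.85) than the safe mean (0.15); after poisoning, the inflated safe mean captures it, which is exactly the "label malicious examples as a desired class" behavior described above.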

Papers

Showing 41–50 of 492 papers

Title | Status | Hype
Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods | Code | 1
Data Poisoning Attacks Against Multimodal Encoders | Code | 1
Data Poisoning in Deep Learning: A Survey | Code | 1
Data Poisoning Won't Save You From Facial Recognition | Code | 1
Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning | Code | 1
DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations | Code | 1
Adversarial Examples Make Strong Poisons | Code | 1
Autoregressive Perturbations for Data Poisoning | Code | 1
Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1
A Distributed Trust Framework for Privacy-Preserving Machine Learning | Code | 1

No leaderboard results yet.