SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, causing it to assign malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
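To make the definition concrete, here is a minimal, hypothetical sketch of the simplest poisoning strategy, label flipping: the attacker relabels a fraction of one class in the training set so that the trained model misclassifies similar inputs at test time. The toy dataset, the classifier choice (scikit-learn logistic regression), and the 70% flip rate are illustrative assumptions, not from the source.

```python
# Label-flipping data-poisoning sketch (illustrative assumptions throughout).
# Class 0 = "safe", class 1 = "spam". The attacker flips 70% of the spam
# labels to "safe" so the poisoned model labels spam-like inputs as safe.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy 2-D dataset: two well-separated Gaussian clusters, 100 points each.
X_safe = rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(100, 2))
X_spam = rng.normal(loc=[2.0, 0.0], scale=0.5, size=(100, 2))
X = np.vstack([X_safe, X_spam])
y = np.array([0] * 100 + [1] * 100)

def train(features, labels):
    """Fit a plain logistic-regression classifier on the given labels."""
    return LogisticRegression().fit(features, labels)

# Clean baseline: trained on correct labels.
clean = train(X, y)

# Poisoned run: flip 70 of the 100 spam labels (indices 100-199) to "safe".
y_poisoned = y.copy()
flipped = rng.choice(np.arange(100, 200), size=70, replace=False)
y_poisoned[flipped] = 0
poisoned = train(X, y_poisoned)

# A held-out spam-like point: the clean model flags it, the poisoned one
# now sees a majority-"safe" cluster there and lets it through.
test_spam = np.array([[2.0, 0.0]])
print("clean model:", clean.predict(test_spam)[0])
print("poisoned model:", poisoned.predict(test_spam)[0])
```

With the majority of the spam cluster relabeled, the learned decision boundary assigns that region to the "safe" class, which is exactly the "labeling spam e-mails as safe" outcome described above. Defenses surveyed in the paper list below (certified bagging, data sanitization, differentially private training) target variants of this threat model.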

Papers

Showing 26-50 of 492 papers

Title | Status | Hype
How To Backdoor Federated Learning | Code | 1
Auditing Differentially Private Machine Learning: How Private is Private SGD? | Code | 1
Adversarial Robustness of Representation Learning for Knowledge Graphs | Code | 1
Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning | Code | 1
Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks | Code | 1
FedDefender: Backdoor Attack Defense in Federated Learning | Code | 1
Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction | Code | 1
BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning | Code | 1
ARFED: Attack-Resistant Federated averaging based on outlier elimination | Code | 1
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1
Amplifying Membership Exposure via Data Poisoning | Code | 1
Bilevel Optimization with a Lower-level Contraction: Optimal Sample Complexity without Warm-start | Code | 1
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models | Code | 1
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1
Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods | Code | 1
Data Poisoning Attacks Against Multimodal Encoders | Code | 1
Data Poisoning in Deep Learning: A Survey | Code | 1
Data Poisoning Won't Save You From Facial Recognition | Code | 1
Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning | Code | 1
DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations | Code | 1
Adversarial Examples Make Strong Poisons | Code | 1
Autoregressive Perturbations for Data Poisoning | Code | 1
Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1
A Distributed Trust Framework for Privacy-Preserving Machine Learning | Code | 1
Page 2 of 20

No leaderboard results yet.