
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
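To make the definition concrete, below is a minimal sketch of the simplest instance of this attack, label flipping: an adversary who can corrupt part of the training set flips a fraction of labels to degrade or steer the trained model. The scikit-learn workflow, synthetic dataset, and 30% flip rate are illustrative assumptions, not taken from this page or from any listed paper.

```python
# A minimal label-flipping data-poisoning sketch (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean binary classification data (a stand-in for e.g. spam vs. safe e-mail).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

def flip_labels(y, rate, rng):
    """Poison the training labels by flipping a random fraction of them."""
    y_poisoned = y.copy()
    n_flip = int(rate * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 0 <-> 1
    return y_poisoned

# Train one model on clean labels and one on poisoned labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train, flip_labels(y_train, rate=0.3, rng=rng))

# The poisoned model's test accuracy drops relative to the clean one.
print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Targeted attacks such as clean-label backdoors (covered by several papers below) are subtler: they preserve overall accuracy while causing specific malicious inputs to be misclassified.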

Papers

Showing 51–75 of 492 papers (page 3 of 20)

Title | Status | Hype
Poisoning Knowledge Graph Embeddings via Relation Inference Patterns | Code | 1
Poisoning Web-Scale Training Datasets is Practical | Code | 1
Data Poisoning in Deep Learning: A Survey | Code | 1
Auditing Differentially Private Machine Learning: How Private is Private SGD? | Code | 1
Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning | Code | 1
Autoregressive Perturbations for Data Poisoning | Code | 1
BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning | Code | 1
Robustness Evaluation of Deep Unsupervised Learning Algorithms for Intrusion Detection Systems | Code | 1
Silent Killer: A Stealthy, Clean-Label, Black-Box Backdoor Attack | Code | 1
A Distributed Trust Framework for Privacy-Preserving Machine Learning | Code | 1
Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1
Backdoor Attacks on Crowd Counting | Code | 1
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1
Data Poisoning Won't Save You From Facial Recognition | Code | 1
Generative Poisoning Using Random Discriminators | Code | 1
BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation | Code | 1
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models | Code | 1
Data Poisoning Attacks Against Multimodal Encoders | Code | 1
Data Poisoning Attacks on Regression Learning and Corresponding Defenses | Code | 1
ARFED: Attack-Resistant Federated averaging based on outlier elimination | Code | 1
DeepfakeArt Challenge: A Benchmark Dataset for Generative AI Art Forgery and Data Poisoning Detection | Code | 1
DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations | Code | 1
Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning | Code | 1
On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping | Code | 1

Leaderboard

No leaderboard results yet.