
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
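The label-flipping variant of the attack described above can be sketched in a few lines. This is a minimal illustrative example, not the method of any paper listed below: an attacker relabels a fraction of "spam" training examples as "safe", and a simple nearest-centroid classifier trained on the poisoned set then misclassifies spam-like inputs. The function names, the toy data, and the 80% flip rate are all assumptions made for the demonstration.

```python
import numpy as np

def poison_labels(y, target_class, flip_fraction, rng):
    """Label-flipping poisoning: relabel a fraction of non-target examples
    as target_class, pulling the learned decision boundary toward them."""
    y = y.copy()
    candidates = np.where(y != target_class)[0]
    n_flip = int(flip_fraction * len(candidates))
    flip_idx = rng.choice(candidates, size=n_flip, replace=False)
    y[flip_idx] = target_class
    return y

def fit_centroids(X, y):
    """Train a nearest-centroid classifier: one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

rng = np.random.default_rng(0)
# toy 2-D data: class 0 ("safe") clustered at -2, class 1 ("spam") at +2
X = np.concatenate([rng.normal(-2, 0.5, (100, 2)),
                    rng.normal(+2, 0.5, (100, 2))])
y = np.concatenate([np.zeros(100, dtype=int), np.ones(100, dtype=int)])

clean = fit_centroids(X, y)
# attacker relabels 80% of the spam examples as "safe"
y_poisoned = poison_labels(y, target_class=0, flip_fraction=0.8, rng=rng)
poisoned = fit_centroids(X, y_poisoned)

test_point = np.array([0.5, 0.5])  # a spam-leaning input
print(predict(clean, test_point))     # 1: classified as spam
print(predict(poisoned, test_point))  # 0: now classified as safe
```

The flipped labels drag the "safe" centroid toward the spam cluster, so the decision boundary shifts and a spam-leaning input crosses over to the attacker's desired class. Real attacks (and the clean-label and backdoor variants studied in the papers below) are subtler, but the underlying mechanism is the same: corrupt the training data, control the trained model's predictions.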

Papers

Showing 26–50 of 492 papers

| Title | Status | Hype |
| --- | --- | --- |
| Data Poisoning Attacks on Regression Learning and Corresponding Defenses | Code | 1 |
| Data Poisoning Won't Save You From Facial Recognition | Code | 1 |
| Adversarial Robustness of Representation Learning for Knowledge Graphs | Code | 1 |
| Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning | Code | 1 |
| DeepfakeArt Challenge: A Benchmark Dataset for Generative AI Art Forgery and Data Poisoning Detection | Code | 1 |
| How to Sift Out a Clean Data Subset in the Presence of Data Poisoning? | Code | 1 |
| Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience | Code | 1 |
| Backdoor Attacks on Crowd Counting | Code | 1 |
| BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation | Code | 1 |
| FedDefender: Backdoor Attack Defense in Federated Learning | Code | 1 |
| Autoregressive Perturbations for Data Poisoning | Code | 1 |
| Amplifying Membership Exposure via Data Poisoning | Code | 1 |
| PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models | Code | 1 |
| DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations | Code | 1 |
| ARFED: Attack-Resistant Federated averaging based on outlier elimination | Code | 1 |
| Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods | Code | 1 |
| Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1 |
| Bilevel Optimization with a Lower-level Contraction: Optimal Sample Complexity without Warm-start | Code | 1 |
| Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction | Code | 1 |
| FR-Train: A Mutual Information-Based Approach to Fair and Robust Training | Code | 1 |
| Generative Poisoning Using Random Discriminators | Code | 1 |
| Adversarial Examples Make Strong Poisons | Code | 1 |
| CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1 |
| Auditing Differentially Private Machine Learning: How Private is Private SGD? | Code | 1 |
| A Distributed Trust Framework for Privacy-Preserving Machine Learning | Code | 1 |
Page 2 of 20

No leaderboard results yet.