SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
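The definition above can be illustrated with a minimal, hypothetical sketch (not the method of any paper listed below): a label-flipping poisoning attack in which the adversary relabels a fraction of "spam" (class 1) training points as "safe" (class 0), shifting the learned decision rule so that malicious inputs are more often classified as safe. A simple nearest-centroid classifier on synthetic data is assumed here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-class data: "safe" (0) centred at -1, "spam" (1) centred at +1.
X = np.vstack([rng.normal(-1.0, 1.0, (200, 2)), rng.normal(1.0, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

def poison_labels(y, target_class=1, flip_to=0, fraction=0.5):
    """Attacker step: flip `fraction` of `target_class` labels to `flip_to`."""
    y_p = y.copy()
    idx = np.flatnonzero(y == target_class)
    chosen = rng.choice(idx, size=int(fraction * idx.size), replace=False)
    y_p[chosen] = flip_to
    return y_p

def fit_centroids(X, y):
    """Toy learner: nearest-centroid classifier (one mean per class)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

clean = fit_centroids(X, y)                    # trained on clean labels
poisoned = fit_centroids(X, poison_labels(y))  # trained on poisoned labels

# Fresh malicious inputs drawn from the spam distribution.
X_spam = rng.normal(1.0, 1.0, (100, 2))
clean_rate = (predict(clean, X_spam) == 1).mean()
poisoned_rate = (predict(poisoned, X_spam) == 1).mean()
print(f"spam flagged by clean model:    {clean_rate:.2f}")
print(f"spam flagged by poisoned model: {poisoned_rate:.2f}")
```

Because the flipped points drag the "safe" centroid toward the spam cluster, the poisoned model's spam region shrinks and it flags fewer malicious inputs, which is exactly the attacker-controlled behavior the definition describes. Real attacks in the papers below are far subtler (clean-label triggers, gradient matching, backdoors), but the threat model is the same.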

Papers

Showing 51–100 of 492 papers

Title | Status | Hype
IMMA: Immunizing text-to-image Models against Malicious Adaptation | Code | 1
Availability Attacks Create Shortcuts | Code | 1
Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks | Code | 1
Auditing Differentially Private Machine Learning: How Private is Private SGD? | Code | 1
Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1
Autoregressive Perturbations for Data Poisoning | Code | 1
Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning | Code | 1
Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks | Code | 1
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1
A Distributed Trust Framework for Privacy-Preserving Machine Learning | Code | 1
Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1
Backdoor Attacks on Crowd Counting | Code | 1
Data Poisoning in Deep Learning: A Survey | Code | 1
Generative Poisoning Using Random Discriminators | Code | 1
MetaPoison: Practical General-purpose Clean-label Data Poisoning | Code | 1
Poisoning Web-Scale Training Datasets is Practical | Code | 1
BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation | Code | 1
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models | Code | 1
Data Poisoning Attacks Against Federated Learning Systems | Code | 1
Data Poisoning Attacks Against Multimodal Encoders | Code | 1
Data Poisoning Won't Save You From Facial Recognition | Code | 1
Not All Poisons are Created Equal: Robust Training against Data Poisoning | Code | 1
DeepfakeArt Challenge: A Benchmark Dataset for Generative AI Art Forgery and Data Poisoning Detection | Code | 1
Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning | Code | 1
ARFED: Attack-Resistant Federated averaging based on outlier elimination | Code | 1
BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning | Code | 1
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
Penalty Method for Inversion-Free Deep Bilevel Optimization | Code | 1
Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching | Code | 1
Poison Ink: Robust and Invisible Backdoor Attack | Code | 1
Learning from Convolution-based Unlearnable Datasets | Code | 0
Addressing The Devastating Effects Of Single-Task Data Poisoning In Exemplar-Free Continual Learning | Code | 0
Lethal Dose Conjecture on Data Poisoning | Code | 0
Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis | Code | 0
Adversarial Robustness of Deep Learning Models for Inland Water Body Segmentation from SAR Images | Code | 0
Keeping up with dynamic attackers: Certifying robustness to adaptive online data poisoning | Code | 0
Lethean Attack: An Online Data Poisoning Technique | Code | 0
Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation | Code | 0
Indiscriminate Data Poisoning Attacks on Neural Networks | Code | 0
Accelerating the Surrogate Retraining for Poisoning Attacks against Recommender Systems | Code | 0
Attacking Black-box Recommendations via Copying Cross-domain User Profiles | Code | 0
Faithful and Efficient Explanations for Neural Networks via Neural Tangent Kernel Surrogate Models | Code | 0
Certified Robustness to Data Poisoning in Gradient-Based Training | Code | 0
Depth-2 Neural Networks Under a Data-Poisoning Attack | Code | 0
HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks | Code | 0
Certified Defenses for Data Poisoning Attacks | Code | 0
Naive Bayes Classifiers over Missing Data: Decision and Poisoning | Code | 0
How Robust are Randomized Smoothing based Defenses to Data Poisoning? | Code | 0
Machine Learning Security against Data Poisoning: Are We There Yet? | Code | 0
From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models | Code | 0
Page 2 of 10

No leaderboard results yet.