Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that malicious examples are assigned to a class of the attacker's choosing (e.g., spam e-mails labeled as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
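The mechanics of the definition above can be sketched with a minimal, hypothetical example: a k-nearest-neighbour "spam filter" trained on synthetic 2-D features, then poisoned by injecting a few attacker-crafted points that look like the target but carry the "safe" label. All data, names, and numbers here are illustrative, not taken from any of the papers below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D feature clusters (hypothetical data): "safe" mail near the origin,
# "spam" mail near (3, 3).
X_safe = rng.normal(loc=0.0, scale=0.5, size=(100, 2))
X_spam = rng.normal(loc=3.0, scale=0.5, size=(100, 2))
X = np.vstack([X_safe, X_spam])
y = np.array([0] * 100 + [1] * 100)   # 0 = safe, 1 = spam

def knn_predict(X_train, y_train, x, k=5):
    """Majority vote over the k nearest training points."""
    dist = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dist)[:k]]
    return int(nearest.mean() > 0.5)

target = np.array([3.0, 3.0])               # a clearly spam-like example
print(knn_predict(X, y, target))            # clean model -> 1 (spam)

# Poisoning by injection: the attacker adds a handful of points that sit
# right on top of the target but carry the "safe" label, so they dominate
# the target's neighborhood at training time.
poison = target + rng.normal(scale=1e-3, size=(5, 2))
X_poisoned = np.vstack([X, poison])
y_poisoned = np.concatenate([y, np.zeros(5, dtype=int)])
print(knn_predict(X_poisoned, y_poisoned, target))  # poisoned model -> 0 (safe)
```

With only five injected points (2.5% of the training set), the poisoned model flips its prediction on the target while behaving normally elsewhere, which is what makes such attacks hard to notice.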

Papers

Showing 301–350 of 492 papers

Title | Status | Hype
Breaking Fair Binary Classification with Optimal Flipping Attacks | — | 0
Machine Learning Security against Data Poisoning: Are We There Yet? | Code | 0
Robustly-reliable learners under poisoning attacks | — | 0
Targeted Data Poisoning Attack on News Recommendation System by Content Perturbation | — | 0
Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning | Code | 1
Degree-Preserving Randomized Response for Graph Neural Networks under Local Differential Privacy | — | 0
Poisoning Attacks and Defenses on Artificial Intelligence: A Survey | — | 0
Collaborative Self Organizing Map with DeepNNs for Fake Task Prevention in Mobile Crowdsensing | — | 0
An Equivalence Between Data Poisoning and Byzantine Gradient Attacks | Code | 0
Bilevel Optimization with a Lower-level Contraction: Optimal Sample Complexity without Warm-start | Code | 1
Redactor: A Data-centric and Individualized Defense Against Inference Attacks | — | 0
BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning | Code | 1
Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation | Code | 0
Towards Multi-Objective Statistically Fair Federated Learning | — | 0
How to Backdoor HyperNetwork in Personalized Federated Learning? | — | 0
Towards Understanding Quality Challenges of the Federated Learning for Neural Networks: A First Look from the Lens of Robustness | Code | 0
Compression-Resistant Backdoor Attack against Deep Neural Networks | — | 0
Execute Order 66: Targeted Data Poisoning for Reinforcement Learning | — | 0
ML Attack Models: Adversarial Attacks and Data Poisoning Attacks | — | 0
Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks | Code | 1
Poisoning Knowledge Graph Embeddings via Relation Inference Patterns | Code | 1
ARFED: Attack-Resistant Federated averaging based on outlier elimination | Code | 1
Get a Model! Model Hijacking Attack Against Machine Learning Models | — | 0
Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods | Code | 1
Mitigating Data Poisoning in Text Classification with Differential Privacy | — | 0
Availability Attacks Create Shortcuts | Code | 1
CoProtector: Protect Open-Source Code against Unauthorized Training Usage with Data Poisoning | Code | 0
Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks | — | 0
Defending Against Backdoor Attacks Using Ensembles of Weak Learners | — | 0
Protecting Proprietary Data: Poisoning for Secure Dataset Release | — | 0
DP-InstaHide: Data Augmentations Provably Enhance Guarantees Against Dataset Manipulations | — | 0
Defending Backdoor Data Poisoning Attacks by Using Noisy Label Defense Algorithm | — | 0
Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning | Code | 1
Backdoor Attack and Defense for Deep Regression | — | 0
Excess Capacity and Backdoor Poisoning | Code | 0
Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction | Code | 1
Certifiers Make Neural Networks Vulnerable to Availability Attacks | — | 0
ABC-FL: Anomalous and Benign client Classification in Federated Learning | — | 0
Classification Auto-Encoder based Detector against Diverse Data Poisoning Attacks | Code | 0
Poison Ink: Robust and Invisible Backdoor Attack | Code | 1
Adversarial Attacks Against Deep Reinforcement Learning Framework in Internet of Vehicles | — | 0
Putting words into the system's mouth: A targeted attack on neural machine translation using monolingual data poisoning | Code | 0
Derivative-free Alternating Projection Algorithms for General Nonconvex-Concave Minimax Problems | — | 0
Fairness-aware Summarization for Justified Decision-Making | — | 0
Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning | Code | 0
Poisoning Attack against Estimating from Pairwise Comparisons | Code | 0
Data Poisoning Won't Save You From Facial Recognition | Code | 1
Adversarial Examples Make Strong Poisons | Code | 1