SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that attempts to manipulate the training dataset in order to control the prediction behavior of the trained model, so that malicious examples are assigned a desired label (e.g., spam e-mails labeled as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
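As a concrete illustration of the definition above, the sketch below poisons a toy two-class "spam vs. ham" training set by flipping the labels of part of the spam class, so that a simple nearest-centroid classifier trained on the poisoned data misses more spam at test time. This is a minimal hypothetical example, not a method from any listed paper; the data, classifier, and all names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: class 0 = "ham" near (-2, -2), class 1 = "spam" near (2, 2).
X0 = rng.normal(loc=-2.0, scale=1.0, size=(100, 2))
X1 = rng.normal(loc=+2.0, scale=1.0, size=(100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

def nearest_centroid_fit(X, y):
    # Fit = store one mean vector (centroid) per class.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    # Predict the class whose centroid is closest in Euclidean distance.
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# Clean model, trained on the unmodified labels.
clean = nearest_centroid_fit(X, y)

# Poisoning: the attacker flips 40 "spam" labels to "ham". This drags the
# "ham" centroid toward the spam cluster, shifting the decision boundary
# so that more real spam falls on the "ham" side.
y_poisoned = y.copy()
flip = rng.choice(np.where(y == 1)[0], size=40, replace=False)
y_poisoned[flip] = 0
poisoned = nearest_centroid_fit(X, y_poisoned)

# Evaluate on fresh spam examples: fraction correctly flagged as spam.
X_test = rng.normal(loc=+2.0, scale=1.0, size=(200, 2))
clean_rate = (predict(clean, X_test) == 1).mean()
poisoned_rate = (predict(poisoned, X_test) == 1).mean()
print(f"spam recall: clean={clean_rate:.2f}, poisoned={poisoned_rate:.2f}")
```

The attacker here never touches the features or the learning algorithm, only the training labels, yet the poisoned model's spam recall is no better (and typically worse) than the clean model's.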

Papers

Showing 251–275 of 492 papers

Title | Status | Hype
Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples | Code | 1
Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks | Code | 1
Defending Against Disinformation Attacks in Open-Domain Question Answering | Code | 0
Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning | — | 0
Rethinking Backdoor Data Poisoning Attacks in the Context of Semi-Supervised Learning | — | 0
Backdoor Vulnerabilities in Normally Trained Deep Learning Models | — | 0
Data Poisoning Attack Aiming the Vulnerability of Continual Learning | — | 0
Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning Few-Shot Meta-Learners | — | 0
Analysis and Detectability of Offline Data Poisoning Attacks on Linear Dynamical Systems | Code | 0
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1
Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1
FLock: Defending Malicious Behaviors in Federated Learning with Blockchain | — | 0
Try to Avoid Attacks: A Federated Data Sanitization Defense for Healthcare IoMT Systems | — | 0
Generative Poisoning Using Random Discriminators | Code | 1
Amplifying Membership Exposure via Data Poisoning | Code | 1
FocusedCleaner: Sanitizing Poisoned Graphs for Robust GNN-based Node Classification | — | 0
Analyzing the Robustness of Decentralized Horizontal and Vertical Federated Learning Architectures in a Non-IID Scenario | — | 0
Training set cleansing of backdoor poisoning by self-supervised representation learning | — | 0
Not All Poisons are Created Equal: Robust Training against Data Poisoning | Code | 1
How to Sift Out a Clean Data Subset in the Presence of Data Poisoning? | Code | 1
Detecting Backdoors in Deep Text Classifiers | — | 0
On Optimal Learning Under Targeted Data Poisoning | — | 0
Understanding Influence Functions and Datamodels via Harmonic Analysis | — | 0
Data Poisoning Attacks Against Multimodal Encoders | Code | 1
Adversarial Robustness of Representation Learning for Knowledge Graphs | Code | 1
Page 11 of 20

No leaderboard results yet.