
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
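
To illustrate the basic mechanism, below is a minimal label-flipping sketch in Python using scikit-learn. It is not drawn from any of the papers listed here; the function name flip_labels and the parameters target_class, fraction, and seed are illustrative assumptions, not an established API.

```python
# Minimal label-flipping sketch (illustrative only; not from any paper below).
# The attacker flips a fraction of the training labels in a target class so
# the trained model tends to mislabel similar examples at test time,
# e.g. spam marked as safe.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

def flip_labels(y, target_class=1, fraction=0.3, seed=0):
    """Flip `fraction` of the training labels in `target_class` to the other class."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = np.flatnonzero(y == target_class)
    flipped = rng.choice(idx, size=int(fraction * len(idx)), replace=False)
    y_poisoned[flipped] = 1 - target_class
    return y_poisoned

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, flip_labels(y_tr))

mask = y_te == 1  # the class the attacker wants misclassified
print("clean accuracy on target class:   ", clean.score(X_te[mask], y_te[mask]))
print("poisoned accuracy on target class:", poisoned.score(X_te[mask], y_te[mask]))
```

Label flipping is the simplest poisoning strategy; many of the papers below study subtler variants, such as clean-label and backdoor attacks, where the injected examples carry correct labels and the manipulation is harder to detect.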

Papers

Showing 251–275 of 492 papers

Title | Status | Hype
Transferable Availability Poisoning Attacks | Code | 0
Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning | - | 0
Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks | Code | 0
Towards Poisoning Fair Representations | - | 0
Post-Training Overfitting Mitigation in DNN Classifiers | - | 0
Seeing Is Not Always Believing: Invisible Collision Attack and Defence on Pre-Trained Models | Code | 0
HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks | Code | 0
CyberForce: A Federated Reinforcement Learning Framework for Malware Mitigation | - | 0
Systematic Testing of the Data-Poisoning Robustness of KNN | - | 0
Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy | - | 0
Analyzing the vulnerabilities in SplitFed Learning: Assessing the robustness against Data Poisoning Attacks | - | 0
What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners? | - | 0
On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks | - | 0
Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving | - | 0
OVLA: Neural Network Ownership Verification using Latent Watermarks | - | 0
Data Poisoning to Fake a Nash Equilibrium in Markov Games | - | 0
FheFL: Fully Homomorphic Encryption Friendly Privacy-Preserving Federated Learning with Byzantine Users | - | 0
Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization | - | 0
Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems | - | 0
Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study | - | 0
Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models | - | 0
Differentially-Private Decision Trees and Provable Robustness to Data Poisoning | Code | 0
From Shortcuts to Triggers: Backdoor Defense with Denoised PoE | Code | 0
Faithful and Efficient Explanations for Neural Networks via Neural Tangent Kernel Surrogate Models | Code | 0
FedGT: Identification of Malicious Clients in Federated Learning with Secure Aggregation | - | 0
Page 11 of 20

Leaderboard

No leaderboard results yet.