SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
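As a concrete illustration of the definition above, here is a minimal, hypothetical sketch of a label-flipping poisoning attack against a simple nearest-centroid classifier. All names and data in it are invented for illustration; real attacks target far more complex models, but the mechanism is the same: corrupting training labels shifts the decision boundary so a chosen example lands in the attacker's desired class.

```python
import numpy as np

def train_centroids(X, y):
    # Nearest-centroid "model": one mean vector per class.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    # Assign x to the class whose centroid is closest.
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Clean training data: class 0 clusters near 0, class 1 near 10.
X = np.array([[0.0], [1.0], [2.0], [8.0], [9.0], [10.0]])
y = np.array([0, 0, 0, 1, 1, 1])

clean = train_centroids(X, y)   # centroids: class 0 -> 1.0, class 1 -> 9.0
target = np.array([4.0])        # the "malicious" example the attacker cares about

# Poisoning: flip the label of one training point (the 2.0 at index 2)
# so the class-1 centroid is dragged toward the target example.
y_poisoned = y.copy()
y_poisoned[2] = 1
poisoned = train_centroids(X, y_poisoned)  # class 0 -> 0.5, class 1 -> 7.25

print(predict(clean, target))     # 0: the clean model rejects the target
print(predict(poisoned, target))  # 1: the poisoned model accepts it
```

With clean labels the target sits 3.0 from the class-0 centroid and 5.0 from class-1, so it is classified as 0; after one flipped label it sits 3.5 from class 0 and 3.25 from class 1, so the same input is now classified as 1.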

Papers

Showing 251-300 of 492 papers

Title | Status | Hype
Transferable Availability Poisoning Attacks | Code | 0
Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning | - | 0
Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks | Code | 0
Towards Poisoning Fair Representations | - | 0
Post-Training Overfitting Mitigation in DNN Classifiers | - | 0
Seeing Is Not Always Believing: Invisible Collision Attack and Defence on Pre-Trained Models | Code | 0
HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks | Code | 0
CyberForce: A Federated Reinforcement Learning Framework for Malware Mitigation | - | 0
Systematic Testing of the Data-Poisoning Robustness of KNN | - | 0
Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy | - | 0
Analyzing the vulnerabilities in SplitFed Learning: Assessing the robustness against Data Poisoning Attacks | - | 0
What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners? | - | 0
On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks | - | 0
Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving | - | 0
OVLA: Neural Network Ownership Verification using Latent Watermarks | - | 0
Data Poisoning to Fake a Nash Equilibrium in Markov Games | - | 0
FheFL: Fully Homomorphic Encryption Friendly Privacy-Preserving Federated Learning with Byzantine Users | - | 0
Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization | - | 0
Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems | - | 0
Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study | - | 0
Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models | - | 0
Differentially-Private Decision Trees and Provable Robustness to Data Poisoning | Code | 0
From Shortcuts to Triggers: Backdoor Defense with Denoised PoE | Code | 0
Faithful and Efficient Explanations for Neural Networks via Neural Tangent Kernel Surrogate Models | Code | 0
FedGT: Identification of Malicious Clients in Federated Learning with Secure Aggregation | - | 0
Evaluating Impact of User-Cluster Targeted Attacks in Matrix Factorisation Recommenders | - | 0
Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks | - | 0
Beyond the Model: Data Pre-processing Attack to Deep Learning Models in Android Apps | - | 0
Interactive System-wise Anomaly Detection | - | 0
INK: Inheritable Natural Backdoor Attack Against Model Distillation | - | 0
Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling | Code | 0
Denoising Autoencoder-based Defensive Distillation as an Adversarial Robustness Algorithm | - | 0
PORE: Provably Robust Recommender Systems against Data Poisoning Attacks | Code | 0
Recursive Euclidean Distance Based Robust Aggregation Technique For Federated Learning | - | 0
Naive Bayes Classifiers over Missing Data: Decision and Poisoning | Code | 0
Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks | Code | 0
WW-FL: Secure and Private Large-Scale Federated Learning | - | 0
QTrojan: A Circuit Backdoor Against Quantum Neural Networks | - | 0
Explainable Label-flipping Attacks on Human Emotion Assessment System | - | 0
Training-free Lexical Backdoor Attacks on Language Models | Code | 0
Data Poisoning Attacks on EEG Signal-based Risk Assessment Systems | - | 0
Temporal Robustness against Data Poisoning | - | 0
Run-Off Election: Improved Provable Defense against Data Poisoning Attacks | Code | 0
CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications | - | 0
Face Recognition in the age of CLIP & Billion image datasets | - | 0
Explainable Data Poison Attacks on Human Emotion Evaluation Systems based on EEG Signals | Code | 0
Federated Transfer-Ordered-Personalized Learning for Driver Monitoring Application | - | 0
Computation and Data Efficient Backdoor Attacks | - | 0
Defending Against Disinformation Attacks in Open-Domain Question Answering | Code | 0
Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning | - | 0
Page 6 of 10

No leaderboard results yet.