
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, such that the model labels malicious examples as a desired class (e.g., labeling spam emails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
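The definition above can be illustrated with a minimal label-flipping sketch. This is an assumption-laden toy example (synthetic data, a logistic-regression victim model, and a 30% flip rate are all choices made here, not taken from the source page): an attacker who can corrupt training labels degrades the model that is fit on them.

```python
# Minimal sketch of a label-flipping data-poisoning attack.
# Dataset, model, and poisoning rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification task standing in for, e.g., spam filtering.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Victim model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Attacker flips the labels of 30% of the training points
# (analogous to marking "spam" examples as "safe").
y_poison = y_tr.copy()
idx = rng.choice(len(y_poison), size=int(0.3 * len(y_poison)), replace=False)
y_poison[idx] = 1 - y_poison[idx]

# Same model class, retrained on the poisoned labels.
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poison)

print(f"clean accuracy:    {clean.score(X_te, y_te):.3f}")
print(f"poisoned accuracy: {poisoned.score(X_te, y_te):.3f}")
```

Targeted attacks such as backdoors are more surgical than this random flipping, aiming to misclassify only attacker-chosen inputs while leaving overall accuracy intact.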

Papers

Showing 176–200 of 492 papers

Title | Status | Hype
BrainWash: A Poisoning Attack to Forget in Continual Learning | | 0
Don't Forget What I did?: Assessing Client Contributions in Federated Learning | | 0
DP-InstaHide: Data Augmentations Provably Enhance Guarantees Against Dataset Manipulations | | 0
Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning Few-Shot Meta-Learners | | 0
Breaking Fair Binary Classification with Optimal Flipping Attacks | | 0
Dual Model Replacement: Invisible Multi-target Backdoor Attack based on Federal Learning | | 0
Defending Backdoor Data Poisoning Attacks by Using Noisy Label Defense Algorithm | | 0
Influence-Driven Data Poisoning in Graph-Based Semi-Supervised Classifiers | | 0
Efficient and Private: Memorisation under differentially private parameter-efficient fine-tuning in language models | | 0
Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks | | 0
Empirical Perturbation Analysis of Linear System Solvers from a Data Poisoning Perspective | | 0
Evaluating Impact of User-Cluster Targeted Attacks in Matrix Factorisation Recommenders | | 0
Cascading Adversarial Bias from Injection to Distillation in Language Models | | 0
CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications | | 0
Execute Order 66: Targeted Data Poisoning for Reinforcement Learning | | 0
Fed-Credit: Robust Federated Learning with Credibility Management | | 0
Explainable Label-flipping Attacks on Human Emotion Assessment System | | 0
A Robust Attack: Displacement Backdoor Attack | | 0
A Backdoor Approach with Inverted Labels Using Dirty Label-Flipping Attacks | | 0
Exploring Vulnerabilities and Protections in Large Language Models: A Survey | | 0
Face Recognition in the age of CLIP & Billion image datasets | | 0
Fairness-aware Summarization for Justified Decision-Making | | 0
FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data Commitment for Federated Learning | | 0
Defending Against Backdoor Attacks Using Ensembles of Weak Learners | | 0
Defending against Backdoor Attack on Deep Neural Networks | | 0
Page 8 of 20

No leaderboard results yet.