
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a class of the attacker's choosing (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
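The label-flipping variant mentioned above can be sketched with a toy 1-nearest-neighbour spam filter. Everything below (the feature layout, the data points, the `predict` helper) is an illustrative assumption, not taken from the cited paper:

```python
# Toy illustration of label-flipping data poisoning (all data invented).
# Features: [link_count, caps_ratio]; labels: "ham" / "spam".

def predict(dataset, x):
    """1-nearest-neighbour classifier: return the label of the closest example."""
    def dist2(point):
        return sum((a - b) ** 2 for a, b in zip(x, point))
    return min(dataset, key=lambda ex: dist2(ex[0]))[1]

clean = [
    ([0.0, 0.1], "ham"), ([0.2, 0.2], "ham"),
    ([3.0, 0.9], "spam"), ([2.5, 0.8], "spam"),
]

target = [2.9, 0.88]           # a spam e-mail the attacker wants marked safe
print(predict(clean, target))  # spam

# The attacker flips the label of the training example closest to the
# target, so the model trained on the poisoned set labels it "ham".
poisoned = [(x, "ham" if x == [3.0, 0.9] else y) for x, y in clean]
print(predict(poisoned, target))  # ham
```

Flipping only the examples nearest the attacker's target keeps the rest of the model's behavior intact, which is what makes such attacks hard to spot from aggregate accuracy alone.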

Papers

Showing 171-180 of 492 papers

| Title | Status | Hype |
|---|---|---|
| Blockchain for Large Language Model Security and Safety: A Holistic Survey | | 0 |
| An Investigation of Data Poisoning Defenses for Online Learning | | 0 |
| Distributed Federated Learning for Vehicular Network Security: Anomaly Detection Benefits and Multi-Domain Attack Threats | | 0 |
| Diversity-aware Dual-promotion Poisoning Attack on Sequential Recommendation | | 0 |
| Do-AIQ: A Design-of-Experiment Approach to Quality Evaluation of AI Mislabel Detection Algorithm | | 0 |
| BrainWash: A Poisoning Attack to Forget in Continual Learning | | 0 |
| Don't Forget What I did?: Assessing Client Contributions in Federated Learning | | 0 |
| DP-InstaHide: Data Augmentations Provably Enhance Guarantees Against Dataset Manipulations | | 0 |
| Defending Backdoor Data Poisoning Attacks by Using Noisy Label Defense Algorithm | | 0 |
| A Backdoor Approach with Inverted Labels Using Dirty Label-Flipping Attacks | | 0 |

Leaderboard

No leaderboard results yet.