SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
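The attack described above can be illustrated with a minimal, self-contained sketch. This is a hypothetical toy example (the nearest-centroid "spam filter" and all names are invented for illustration, not taken from the cited paper): an attacker injects spam-like points mislabeled as "safe" into the training set, which shifts the learned "safe" centroid toward the spam region and flips the model's prediction on a borderline e-mail.

```python
# Toy label-flipping data-poisoning sketch (illustrative only).
# A nearest-centroid "spam filter" on a 1-D spamminess score in [0, 1].

def centroid(xs):
    return sum(xs) / len(xs)

def train(data):
    # data: list of (score, label) pairs with label in {"spam", "safe"}
    spam = [x for x, y in data if y == "spam"]
    safe = [x for x, y in data if y == "safe"]
    return centroid(spam), centroid(safe)

def predict(model, x):
    # Classify by whichever class centroid is closer to the score x.
    c_spam, c_safe = model
    return "spam" if abs(x - c_spam) < abs(x - c_safe) else "safe"

# Clean training data: spam scores high, safe scores low.
clean = [(0.9, "spam"), (0.8, "spam"), (0.1, "safe"), (0.2, "safe")]
print(predict(train(clean), 0.7))  # borderline spam is flagged: "spam"

# Poison: spam-like scores mislabeled "safe" drag the safe centroid
# from 0.15 up to 0.6, capturing the borderline region.
poison = [(0.95, "safe"), (0.9, "safe"), (0.85, "safe")]
print(predict(train(clean + poison), 0.7))  # now misclassified: "safe"
```

With only three injected points, the poisoned model labels the same borderline spam e-mail as safe, which is exactly the targeted misclassification the definition describes; real attacks apply the same idea to high-dimensional feature spaces and more capable models.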

Papers

Showing 441–450 of 492 papers

Title | Status | Hype
Deep Probabilistic Models to Detect Data Poisoning Attacks | — | 0
Proving Data-Poisoning Robustness in Decision Trees | — | 0
Data Poisoning Attacks on Neighborhood-based Recommender Systems | — | 0
Local Model Poisoning Attacks to Byzantine-Robust Federated Learning | — | 0
Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic | — | 0
Penalty Method for Inversion-Free Deep Bilevel Optimization | Code | 1
A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning | — | 0
Shapley Homology: Topological Analysis of Sample Influence for Neural Networks | — | 0
Detecting AI Trojans Using Meta Neural Analysis | Code | 0
Deep k-NN Defense against Clean-label Data Poisoning Attacks | Code | 0
Page 45 of 50

No leaderboard results yet.