
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
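The simplest instance of this attack is label flipping: the adversary relabels part of the training set so the learned decision boundary shifts in their favor. Below is a minimal sketch in numpy; the toy spam/safe data, the nearest-centroid classifier, and the 35% flip fraction are illustrative assumptions, not taken from any paper listed on this page.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task (illustrative): class 0 = "spam", class 1 = "safe".
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)),   # spam cluster
               rng.normal(+2.0, 0.5, (50, 2))])  # safe cluster
y = np.array([0] * 50 + [1] * 50)

def fit_nearest_centroid(X, y):
    """Fit a nearest-centroid classifier and return its predict function."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    return lambda x: int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

clean_predict = fit_nearest_centroid(X, y)

# Label-flipping attack: the adversary relabels 35 of the 50 spam points
# as "safe", dragging the "safe" centroid toward the spam region.
y_poisoned = y.copy()
y_poisoned[np.where(y == 0)[0][:35]] = 1
poisoned_predict = fit_nearest_centroid(X, y_poisoned)

# A borderline spam-like e-mail now lands on the "safe" side.
borderline = np.array([-0.5, -0.5])
print(clean_predict(borderline), poisoned_predict(borderline))  # 0 1
```

The clean model classifies the borderline point as spam; after poisoning, the shifted "safe" centroid captures it, which is exactly the misclassification goal described in the definition above.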

Papers

Showing 11–20 of 492 papers

| Title | Status | Hype |
| --- | --- | --- |
| ARFED: Attack-Resistant Federated averaging based on outlier elimination | Code | 1 |
| BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation | Code | 1 |
| Auditing Differentially Private Machine Learning: How Private is Private SGD? | Code | 1 |
| Autoregressive Perturbations for Data Poisoning | Code | 1 |
| BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning | Code | 1 |
| Bilevel Optimization with a Lower-level Contraction: Optimal Sample Complexity without Warm-start | Code | 1 |
| Backdoor Attacks on Crowd Counting | Code | 1 |
| Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods | Code | 1 |
| Amplifying Membership Exposure via Data Poisoning | Code | 1 |
| Adversarial Robustness of Representation Learning for Knowledge Graphs | Code | 1 |
Page 2 of 50

No leaderboard results yet.