SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
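A minimal sketch of one common poisoning strategy, label flipping, in the spam/safe setting used in the definition above. The function name and the binary 0 = spam / 1 = safe encoding are illustrative assumptions, not taken from the cited paper:

```python
import numpy as np

def poison_labels(y, source_class, target_class, fraction, rng):
    """Label-flipping data poisoning: relabel a random fraction of the
    source-class training points as the attacker's target class."""
    y = y.copy()
    idx = np.flatnonzero(y == source_class)          # candidate points to poison
    n_poison = int(fraction * len(idx))              # attack budget
    flip = rng.choice(idx, size=n_poison, replace=False)
    y[flip] = target_class                           # inject the wrong labels
    return y

rng = np.random.default_rng(0)
y_train = np.array([0] * 50 + [1] * 50)              # 0 = spam, 1 = safe (hypothetical data)
# Relabel 20% of the spam examples as safe before training.
y_poisoned = poison_labels(y_train, source_class=0, target_class=1,
                           fraction=0.2, rng=rng)
```

A model fit on `y_poisoned` instead of `y_train` learns a decision boundary shifted toward classifying spam-like inputs as safe, which is the attack behavior described above.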

Papers

Showing 201–210 of 492 papers

Title | Status | Hype
Attacks against Abstractive Text Summarization Models through Lead Bias and Influence Functions | | 0
Adversarial Data Poisoning Attacks on Quantum Machine Learning in the NISQ Era | | 0
Federated Multi-Armed Bandits Under Byzantine Attacks | | 0
Federated Learning with Dual Attention for Robust Modulation Classification under Attacks | | 0
Federated Learning: Balancing the Thin Line Between Data Intelligence and Privacy | | 0
Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning | | 0
Fed-Credit: Robust Federated Learning with Credibility Management | | 0
Federated Unlearning | | 0
FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data Commitment for Federated Learning | | 0
FedBayes: A Zero-Trust Federated Learning Aggregation to Defend Against Adversarial Attacks | | 0
Page 21 of 50

No leaderboard results yet.