SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, causing it to assign malicious examples to attacker-chosen classes (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
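To make the definition concrete, here is a minimal, hypothetical sketch of the simplest form of data poisoning, label flipping. All data, class names, and the toy nearest-centroid "spam filter" are invented for illustration; real attacks and models are far more sophisticated. The attacker flips the labels of training points near a chosen target so the poisoned model labels that malicious example as safe:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: class 0 = "safe", class 1 = "spam" (synthetic clusters).
safe = rng.normal(loc=0.0, scale=0.5, size=(100, 2))
spam = rng.normal(loc=3.0, scale=0.5, size=(100, 2))
X = np.vstack([safe, spam])
y = np.array([0] * 100 + [1] * 100)

def nearest_centroid_predict(X_train, y_train, x):
    """Classify x by whichever class centroid is closer."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    return 0 if np.linalg.norm(x - c0) <= np.linalg.norm(x - c1) else 1

# A spam-like e-mail the attacker wants classified as "safe".
target = np.array([1.8, 1.8])
clean_pred = nearest_centroid_predict(X, y, target)  # correctly "spam"

# Poisoning step: flip the labels of the 60 spam points closest to the
# target, dragging the "safe" centroid toward it.
dists = np.linalg.norm(X - target, axis=1)
spam_idx = np.where(y == 1)[0]
flip = spam_idx[np.argsort(dists[spam_idx])[:60]]
y_poisoned = y.copy()
y_poisoned[flip] = 0

poisoned_pred = nearest_centroid_predict(X, y_poisoned, target)
print(clean_pred, poisoned_pred)  # the poisoned model now says "safe"
```

The same idea scales up: by controlling even a fraction of the training labels, an attacker can shift a model's decision boundary around chosen inputs, which is what many of the papers listed below attack or defend against.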

Papers

Showing 261–270 of 492 papers

Title | Status | Hype
Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1
FLock: Defending Malicious Behaviors in Federated Learning with Blockchain | | 0
Try to Avoid Attacks: A Federated Data Sanitization Defense for Healthcare IoMT Systems | | 0
Generative Poisoning Using Random Discriminators | Code | 1
Amplifying Membership Exposure via Data Poisoning | Code | 1
FocusedCleaner: Sanitizing Poisoned Graphs for Robust GNN-based Node Classification | | 0
Analyzing the Robustness of Decentralized Horizontal and Vertical Federated Learning Architectures in a Non-IID Scenario | | 0
Training set cleansing of backdoor poisoning by self-supervised representation learning | | 0
Not All Poisons are Created Equal: Robust Training against Data Poisoning | Code | 1
How to Sift Out a Clean Data Subset in the Presence of Data Poisoning? | Code | 1

No leaderboard results yet.